Science.gov

Sample records for adjusting model parameters

  1. An approach to adjustment of relativistic mean field model parameters

    NASA Astrophysics Data System (ADS)

    Bayram, Tuncay; Akkoyun, Serkan

    2017-09-01

    The Relativistic Mean Field (RMF) model, with a small number of adjusted parameters, is a powerful tool for correctly predicting various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters against experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed to improve the RMF model parameters. In particular, the ANN method's capability to capture the relations between the RMF model parameters and their predictions for the binding energies (BEs) of 58Ni and 208Pb has been found to agree with literature values.
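A minimal sketch of the underlying idea, not the authors' network or data: a one-hidden-layer feed-forward network trained by stochastic gradient descent in pure Python learns a smooth parameter-to-observable mapping. The training function here is an illustrative stand-in for the RMF-parameters-to-binding-energy relation.

```python
import math, random

random.seed(0)

# Toy "model parameter -> predicted observable" pairs. In the paper's setting
# the inputs would be RMF parameter sets and the target a binding energy;
# here an illustrative smooth function stands in for that relation.
data = [(x / 10.0, math.sin(x / 10.0) + 0.5 * (x / 10.0)) for x in range(-30, 31)]

H = 8       # hidden units
lr = 0.05   # learning rate
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, b2 + sum(w2[j] * h[j] for j in range(H))

def epoch():
    """One pass of per-sample gradient descent; returns mean squared error."""
    global b2
    total = 0.0
    for x, y in data:
        h, yhat = forward(x)
        err = yhat - y
        total += err * err
        for j in range(H):
            g_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * g_h * x
            b1[j] -= lr * g_h
        b2 -= lr * err
    return total / len(data)

loss0 = epoch()
for _ in range(200):
    loss = epoch()
print(loss0, loss)  # training error drops as the net learns the mapping
```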

  2. Tweaking Model Parameters: Manual Adjustment and Self Calibration

    NASA Astrophysics Data System (ADS)

    Schulz, B.; Tuffs, R. J.; Laureijs, R. J.; Lu, N.; Peschke, S. B.; Gabriel, C.; Khan, I.

    2002-12-01

    The reduction of P32 data is not always straightforward, and the application of the transient model needs tight control by the user. This paper describes how to access the model parameters within the P32Tools software and how to work with the "Inspect signals per pixel" panel, in order to explore the parameter space and improve the model fit.

  3. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment

    PubMed Central

    Goeritz, Marie L.; Marder, Eve

    2014-01-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. PMID:25008414
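The control adjustment can be sketched in a few lines. This toy example (illustrative values, and a simple exponential-decay "voltage" in place of a multicompartmental neuron model) shows how a weak observer term K*(v_data - v_model) keeps a mis-parameterized model aligned with a recorded trace, in the spirit of the Luenberger observer mentioned above.

```python
import math

dt, T = 0.01, 5.0
n = int(T / dt)
a_true, a_model = 2.0, 1.2   # the model's decay rate is deliberately wrong

# "recorded" trace from the true system dV/dt = -a_true * V, V(0) = 1
v_data = [math.exp(-a_true * i * dt) for i in range(n)]

def simulate(K):
    """Euler-integrate the model with observer gain K; return mean sq. mismatch."""
    v, sse = 1.0, 0.0
    for i in range(n):
        err = v_data[i] - v
        sse += err * err
        v += dt * (-a_model * v + K * err)   # weak control adjustment
    return sse / n

free = simulate(0.0)     # uncoupled model drifts away from the data
coupled = simulate(5.0)  # the observer term keeps the traces aligned
print(free, coupled)
```

With the feedback switched on, the mismatch between model and data stays small even though the parameter is wrong, which is what makes the error landscape tractable during fitting.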

  4. A space-variant filter model of texture segregation: parameter adjustment guided by psychophysical data.

    PubMed

    Kehrer, L; Meinecke, C

    2003-03-01

    This article presents a space-variant version of a standard spatial filter model of texture segregation of the "back-pocket" type (i.e., two filter layers with an intermediate pointwise nonlinearity). The model was tested with psychophysical data from experiments with line textures in which target lines differed in orientation from background lines. The textures were presented briefly and then masked. Segregation performance was evaluated along the horizontal meridian up to retinal eccentricities of about 10 deg. Data are reported from two experiments with different line densities (Kehrer 1989) and two experiments with different orientation contrasts between target lines and background lines (Kehrer 1990). Segregation performance proved to depend strongly on these texture variations, and it peaked several degrees from fixation in all cases. The filter model provided satisfactory predictions of experimental data when model parameters were adjusted appropriately. It is concluded (1) that filter models defined in strictly spatial terms (i.e., without temporal properties) offer a sufficient framework to account for the psychophysical data and (2) that the particular course of the performance curve (i.e., the performance peak outside the central region) must be attributed to the characteristics of second-layer filters.

  5. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

    Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-calculated TEC parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated by the TaD model to the TEC parameter derived from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is calculated incorrectly, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  6. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    NASA Astrophysics Data System (ADS)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates a real pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system. Waiting time for each server was analysed, and Counters 3 and 4 were found to have the highest waiting times, at 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4, and M/M/5) were simulated, and waiting times for the actual and experimental queuing models were compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added. However, it is not necessary to fully staff all counters: even though M/M/4 and M/M/5 produced further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
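The M/M/c quantities behind such comparisons follow in closed form from the Erlang C formula. The sketch below uses illustrative arrival and service rates (not the hospital's measured values) to show how the mean queueing time Wq drops as servers are added.

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arrival must wait in an M/M/c queue (Erlang C formula)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # server utilization
    assert rho < 1, "queue is unstable"
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Mean time in queue: Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative rates: 1.8 patients/min arriving, each server
# dispensing at 1 patient/min (these are NOT the paper's data).
lam, mu = 1.8, 1.0
for c in (2, 3, 4):
    print(c, round(mean_wait(c, lam, mu), 3))
```

Adding the third server cuts Wq sharply, mirroring the paper's observation that one extra pharmacist yields most of the improvement while further servers see diminishing use.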

  7. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot produce well-exposed images of both the target celestial body and stars because their irradiance difference is typically large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of imaging both the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  9. Resonance Parameter Adjustment Based on Integral Experiments

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
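The GLLS/Bayesian update itself is a short computation. The sketch below works through a two-parameter, one-response case with illustrative numbers (not actual resonance parameters or benchmark data): x' = x + C A^T (A C A^T + V)^-1 (y - A x), with the matching covariance reduction.

```python
# Illustrative GLLS (Bayesian) update for two parameters, one response y ~ A x.
x = [1.0, 2.0]                    # prior parameter values
C = [[0.04, 0.01], [0.01, 0.09]]  # prior parameter covariance
A = [0.5, 1.0]                    # response sensitivities dR/dx
y, V = 2.8, 0.01                  # measured integral response and its variance

# innovation r and its variance s = A C A^T + V (scalars for one response)
r = y - (A[0] * x[0] + A[1] * x[1])
CAt = [C[0][0] * A[0] + C[0][1] * A[1],
       C[1][0] * A[0] + C[1][1] * A[1]]
s = A[0] * CAt[0] + A[1] * CAt[1] + V

# update: x' = x + C A^T s^-1 r ;  C' = C - C A^T s^-1 A C
gain = [CAt[0] / s, CAt[1] / s]
x_post = [x[0] + gain[0] * r, x[1] + gain[1] * r]
C_post = [[C[i][j] - gain[i] * CAt[j] for j in range(2)] for i in range(2)]

r_post = y - (A[0] * x_post[0] + A[1] * x_post[1])
print(x_post, r_post)  # posterior prediction moves toward the measurement
```

Because the measurement variance V stays in the denominator, a well-measured differential prior is pulled only gently toward the integral response, which is the "proper weight" property the abstract emphasizes.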

  12. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.

  14. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    SciTech Connect

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  15. Sensitivity of adjustment to parameter correlations and to response-parameter correlations

    SciTech Connect

    Wagschal, J.J.

    2011-07-01

    The adjusted parameters and response, and their respective posterior uncertainties and correlations, are presented explicitly as functions of all relevant prior correlations for the two-parameter, one-response case. The dependence of these adjusted entities on the various prior correlations is analyzed and portrayed graphically for various valid correlation combinations on a simple criticality problem. (authors)

  16. Optimal Linking Design for Response Model Parameters

    ERIC Educational Resources Information Center

    Barrett, Michelle D.; van der Linden, Wim J.

    2017-01-01

    Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…

  17. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES Emission Standards and Certification Provisions § 89.108 Adjustable parameters, requirements. (a) Nonroad engines... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified...

  18. Effect of Adjusting Pseudo-Guessing Parameter Estimates on Test Scaling When Item Parameter Drift Is Present

    ERIC Educational Resources Information Center

    Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K.

    2015-01-01

    In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…

  19. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2017-09-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
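The reweighting step at the heart of such estimators is compact. The sketch below simplifies heavily relative to the paper (plain straight-line regression, synthetic data, no AR component, a fixed degree of freedom nu): it applies the t-distribution weights w_i = (nu + 1) / (nu + r_i^2 / sigma^2), which down-weight an outlier at each iteration.

```python
nu = 3.0                           # fixed t degrees of freedom (assumption)
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]   # true line y = 1 + 2x
ys[5] += 25.0                      # one gross outlier

def weighted_fit(w):
    """Weighted least-squares straight-line fit; returns (intercept, slope)."""
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return (sy - slope * sx) / sw, slope

w = [1.0] * len(xs)
for _ in range(20):                # IRLS iterations
    a, b = weighted_fit(w)
    res = [y - (a + b * x) for x, y in zip(xs, ys)]
    sigma2 = max(sum(wi * r * r for wi, r in zip(w, res)) / len(xs), 1e-12)
    w = [(nu + 1.0) / (nu + r * r / sigma2) for r in res]

a_ls, b_ls = weighted_fit([1.0] * len(xs))  # ordinary least squares, for contrast
print(b, b_ls)  # robust slope ends up closer to the true value 2.0
```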

  20. Adjustment of Tsunami Source Parameters By Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Pires, C.; Miranda, P.

    Tsunami waveforms recorded at tide gauges can be used to adjust tsunami source parameters and, indirectly, seismic focal parameters. Simple inversion methods, based on ray-tracing techniques, only used a small fraction of available information. More elaborate techniques, based on Green's function methods, also have some limitations in their scope. A new methodology, using a variational approach, allows for a much more general inversion, which can directly optimize focal parameters of tsunamigenic earthquakes. Idealized synthetic data and an application to the 1969 Gorringe Earthquake are used to validate the methodology.

  1. Demand-Adjusted Shelf Availability Parameters: A Second Look.

    ERIC Educational Resources Information Center

    Schwarz, Philip

    1983-01-01

    Data gathered in an application of Paul Kantor's demand-adjusted shelf availability model to a medium-sized academic library indicate significant differences in shelf availability when data are analyzed by last circulation date, acquisition date, and imprint date, and when they are gathered during periods of low and high use. Ten references are cited.

  2. Adjustment of endogenous concentrations in pharmacokinetic modeling.

    PubMed

    Bauer, Alexander; Wolfsegger, Martin J

    2014-12-01

    Estimating pharmacokinetic parameters in the presence of an endogenous concentration is not straightforward as cross-reactivity in the analytical methodology prevents differentiation between endogenous and dose-related exogenous concentrations. This article proposes a novel intuitive modeling approach which adequately adjusts for the endogenous concentration. Monte Carlo simulations were carried out based on a two-compartment population pharmacokinetic (PK) model fitted to real data following intravenous administration. A constant and a proportional error model were assumed. The performance of the novel model and the method of straightforward subtraction of the observed baseline concentration from post-dose concentrations were compared in terms of terminal half-life, area under the curve from 0 to infinity, and mean residence time. Mean bias in PK parameters was up to 4.5 times better with the novel model assuming a constant error model and up to 6.5 times better assuming a proportional error model. The simulation study indicates that this novel modeling approach results in less biased and more accurate PK estimates than straightforward subtraction of the observed baseline concentration and overcomes the limitations of previously published approaches.
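A toy numerical sketch of the contrast (mono-exponential decay plus an endogenous baseline, all values illustrative, and a coarse grid search standing in for the paper's population PK fit): subtracting a single noisy observed baseline biases the estimated elimination rate, while treating the baseline as a fitted model parameter recovers it.

```python
import math

# true system: C(t) = B + C0 * exp(-k * t), with endogenous baseline B
B_true, C0, k_true = 5.0, 100.0, 0.5
ts = [1.0, 2.0, 4.0, 8.0, 12.0]
obs = [B_true + C0 * math.exp(-k_true * t) for t in ts]
B_obs = 4.5   # measured pre-dose concentration, off due to assay noise

def fit_given_B(B):
    """Subtract baseline B, fit log-linear decay; return (SSE, rate estimate)."""
    cs = [c - B for c in obs]
    if min(cs) <= 0:
        return None
    ls = [math.log(c) for c in cs]
    n = len(ts)
    mx, my = sum(ts) / n, sum(ls) / n
    sl = sum((t - mx) * (l - my) for t, l in zip(ts, ls)) / \
         sum((t - mx) ** 2 for t in ts)
    a0 = my - sl * mx
    sse = sum((B + math.exp(a0 + sl * t) - c) ** 2 for t, c in zip(ts, obs))
    return sse, -sl

# naive approach: subtract the single observed baseline value
k_naive = fit_given_B(B_obs)[1]

# model-based approach: treat the baseline as a parameter (coarse grid search)
fits = [fit_given_B(i * 0.01) for i in range(520)]
k_model = min(f for f in fits if f is not None)[1]
print(k_naive, k_model)
```

Here the naive subtraction flattens the terminal slope (biasing the half-life long), whereas the model-based fit lands essentially on the true rate of 0.5, illustrating the direction of the bias the simulation study quantifies.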

  3. Kuk's Model Adjusted for Protection and Efficiency

    ERIC Educational Resources Information Center

    Su, Shu-Ching; Sedory, Stephen A.; Singh, Sarjinder

    2015-01-01

    In this article, we adjust the Kuk randomized response model for collecting information on a sensitive characteristic for increased protection and efficiency by making use of forced "yes" and forced "no" responses. We first describe Kuk's model and then the proposed adjustment to Kuk's model. Next, by means of a simulation…

  4. Impact of dose calculation models on radiotherapy outcomes and quality adjusted life years for lung cancer treatment: do we need to measure radiotherapy outcomes to tune the radiobiological parameters of a normal tissue complication probability model?

    PubMed Central

    Docquière, Nicolas; Bondiau, Pierre-Yves; Balosso, Jacques

    2016-01-01

    Background The equivalent uniform dose (EUD) radiobiological model can be applied to lung cancer treatment plans to estimate the tumor control probability (TCP) and the normal tissue complication probability (NTCP) using different dose calculation models. Then, based on the different calculated doses, the quality adjusted life years (QALY) score can be assessed against the uncomplicated tumor control probability (UTCP) concept in order to predict the overall outcome of the different treatment plans. Methods Nine lung cancer cases were included in this study. For each patient, two treatment plans were generated. Doses were calculated with a pencil beam model (pencil beam convolution, PBC, with 1D density correction by the Modified Batho (MB) method) and with a point kernel model (anisotropic analytical algorithm, AAA), using exactly the same prescribed dose, normalized to 100% at the isocentre inside the target, and the same beam arrangements. The radiotherapy outcomes and QALY were compared. The bootstrap method was used to improve the estimation of 95% confidence intervals (95% CI). The Wilcoxon paired test was used to calculate P values. Results Compared to AAA, considered more realistic, PBC-MB overestimated the TCP while underestimating the NTCP (P<0.05). Thus the UTCP and the QALY score were also overestimated. Conclusions To correlate QALYs measured from patient follow-up with QALYs calculated from DVH metrics, the more accurate dose calculation models should first be integrated into clinical use. Second, clinically measured outcomes are necessary to tune the parameters of the NTCP model used to link the treatment outcome with the QALY. Only after these two steps would the comparison and ranking of different radiotherapy plans be possible, avoiding over/under-estimation of QALY and any other clinico-biological estimates. PMID:28149761
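The EUD-to-NTCP chain described above can be sketched numerically. The dose-volume histograms and parameter values (a, TD50, m) below are illustrative placeholders, not the paper's plans or fitted lung values; the second histogram mimics a dose model that reports slightly higher normal-tissue doses.

```python
import math

def geud(dvh, a):
    """Generalized EUD from a DVH given as (fractional_volume, dose_Gy) pairs:
    gEUD = (sum_i v_i * d_i^a)^(1/a)."""
    return sum(v * d**a for v, d in dvh) ** (1.0 / a)

def ntcp_lkb(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# same plan scored with two dose calculation models (illustrative DVHs)
dvh_model_1 = [(0.2, 5.0), (0.5, 15.0), (0.3, 30.0)]
dvh_model_2 = [(0.2, 6.0), (0.5, 17.0), (0.3, 32.0)]   # slightly higher doses

for dvh in (dvh_model_1, dvh_model_2):
    e = geud(dvh, a=1.0)           # a = 1 reduces gEUD to the mean dose
    print(round(e, 2), round(ntcp_lkb(e, td50=24.5, m=0.18), 3))
```

Because NTCP is monotone in gEUD, even a modest systematic dose difference between calculation models shifts the predicted complication probability, which is why the choice of dose model propagates into the QALY comparison.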

  5. Dynamic adjustment of hidden node parameters for extreme learning machine.

    PubMed

    Feng, Guorui; Lan, Yuan; Zhang, Xinpeng; Qian, Zhenxing

    2015-02-01

    Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single hidden layer feedforward networks with a wide variety of hidden nodes. ELMs have proved very fast and effective, especially for solving function approximation problems with a predetermined network structure. However, an ELM may contain insignificant hidden nodes. In this paper, we propose dynamic adjustment ELM (DA-ELM), which can further tune the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved in this paper that the energy error can be effectively reduced by applying a recursive expectation-minimization theorem. In DA-ELM, the input parameters of insignificant hidden nodes are updated in the decreasing direction of the energy error in each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as Bayesian ELM, optimally-pruned ELM, two-stage ELM, Levenberg-Marquardt, and the sensitivity-based linear learning method, as well as the preliminary ELM.

  6. Europium Luminescence: Electronic Densities and Superdelocalizabilities for a Unique Adjustment of Theoretical Intensity Parameters

    PubMed Central

    Dutra, José Diogo L.; Lima, Nathalia B. D.; Freire, Ricardo O.; Simas, Alfredo M.

    2015-01-01

    We advance the concept that the charge factors of the simple overlap model and the polarizabilities of Judd-Ofelt theory for the luminescence of europium complexes can be effectively and uniquely modeled by perturbation theory on the semiempirical electronic wave function of the complex. With only three adjustable constants, we introduce expressions that relate: (i) the charge factors to electronic densities, and (ii) the polarizabilities to superdelocalizabilities that we derived specifically for this purpose. The three constants are then adjusted iteratively until the calculated intensity parameters, corresponding to the 5D0→7F2 and 5D0→7F4 transitions, converge to the experimentally determined ones. This adjustment yields a single unique set of only three constants per complex and semiempirical model used. From these constants, we then define a binary outcome acceptance attribute for the adjustment, and show that when the adjustment is acceptable, the predicted geometry is, on average, closer to the experimental one. An important consequence is that the terms of the intensity parameters related to dynamic coupling and electric dipole mechanisms will be unique. Hence, the important energy transfer rates will also be unique, leading to a single predicted intensity parameter for the 5D0→7F6 transition. PMID:26329420

  7. Adjusting STEMS growth model for Wisconsin forests.

    Treesearch

    Margaret R. Holdaway

    1985-01-01

    Describes a simple procedure for adjusting growth in the STEMS regional tree growth model to compensate for subregional differences. Coefficients are reported to adjust Lake States STEMS to the forests of Northern and Central Wisconsin--an area of essentially uniform climate and similar broad physiographic features. Errors are presented for various combinations of...

  8. Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment

    NASA Astrophysics Data System (ADS)

    Zheng, Shunyi; Wang, Zheng; Huang, Rongyong

    2015-04-01

    A zoom lens is more flexible for photogrammetric measurements under diverse environments than a fixed lens. However, challenges in calibration of zoom-lens cameras preclude the wide use of zoom lenses in the field of close-range photogrammetry. Thus, a novel zoom lens calibration method is proposed in this study. In this method, instead of conducting modeling after monofocal calibrations, we summarize the empirical zoom/focus models of intrinsic parameters first and then incorporate these parameters into traditional collinearity equations to construct the fundamental mathematical model, i.e., collinearity equations with zoom- and focus-related intrinsic parameters. Similar to monofocal calibration, images taken at several combinations of zoom and focus settings are processed in a single self-calibration bundle adjustment. In the self-calibration bundle adjustment, three types of unknowns, namely, exterior orientation parameters, unknown space point coordinates, and model coefficients of the intrinsic parameters, are solved simultaneously. Experiments on three different digital cameras with zoom lenses support the feasibility of the proposed method, and their relative accuracies range from 1:4000 to 1:15,100. Furthermore, the nominal focal length written in the exchangeable image file header is found to lack reliability in experiments. Thereafter, the joint influence of zoom lens instability and zoom recording errors is further analyzed quantitatively. The analysis result is consistent with the experimental result and explains the reason why zoom lens calibration can never have the same accuracy as monofocal self-calibration.
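The paper's empirical zoom/focus models of the intrinsic parameters are not reproduced here, but the underlying idea of approximating an intrinsic parameter as a smooth function of the zoom setting can be sketched with a least-squares polynomial fit; the calibrated focal-length values below are hypothetical:

```python
import numpy as np

# hypothetical focal lengths (mm) calibrated at discrete zoom settings
zoom = np.array([18.0, 24.0, 35.0, 50.0, 70.0])
f_cal = np.array([18.3, 24.4, 35.6, 50.9, 71.2])

# quadratic model f(z) = c2*z^2 + c1*z + c0, fitted by least squares
coef = np.polyfit(zoom, f_cal, deg=2)
f_model = np.polyval(coef, zoom)
rms = float(np.sqrt(np.mean((f_model - f_cal) ** 2)))
```

In the paper's method, coefficients like these are not fitted separately per setting but are solved simultaneously with the exterior orientations and object points inside the self-calibration bundle adjustment.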

  9. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... must specify in the maintenance instructions how to adjust the engines to achieve emission performance... temperatures. For example, equivalent emissions performance can be measured relative to optimal engine... retarding timing the same number of degrees (relative to optimal engine performance) and using the same rate...

  10. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... must specify in the maintenance instructions how to adjust the engines to achieve emission performance... temperatures. For example, equivalent emissions performance can be measured relative to optimal engine... retarding timing the same number of degrees (relative to optimal engine performance) and using the same rate...

  11. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... must specify in the maintenance instructions how to adjust the engines to achieve emission performance... temperatures. For example, equivalent emissions performance can be measured relative to optimal engine... retarding timing the same number of degrees (relative to optimal engine performance) and using the same rate...

  12. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification, selective enforcement audit, or in-use testing to determine compliance with the requirements of... necessary for proper operation of the engine. (e) Tier 1 Category 3 marine engines shall be adjusted... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Prohibited controls,...

  13. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... temperatures. For example, equivalent emissions performance can be measured relative to optimal engine... the lowest fuel consumption and/or maximum firing pressure). In this example, adjustments that... retarding timing the same number of degrees (relative to optimal engine performance) and using the same rate...

  14. Determination and adjustment of drying parameters of Tunisian ceramic bodies

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Salah; Bennour, Ali; Srasra, Ezzeddine; Zargouni, Fouad

    2016-12-01

    This work deals with the mineralogical, physico-chemical and geotechnical analyses of representative Aptian clays in the north-east of Tunisia. X-ray diffraction reveals a predominance of illite (50-60 wt%) associated with kaolinite and interstratified illite/smectite. The accessory minerals detected in the raw materials are quartz, calcite and Na-feldspar. The average amounts of silica, alumina and alkalis are 52, 20 and 3.5 wt%, respectively. The contents of lime and iron vary between 4 and 8 wt%. The plasticity test shows medium values of the plasticity index (16-28 wt%). The linear drying shrinkage is weak (less than 0.99 wt%), which makes these clays suitable for fast drying. The firing shrinkage and expansion are limited. A lower firing and drying temperature allows significant energy savings. Currently, these clays are used in industry for manufacturing earthenware tiles. For the optimum exploitation of the clay materials and improvement of production conditions, a mathematical formulation is established for the drying parameters. These models predict drying shrinkage (d), bending strength after drying (b) and residual moisture (r) from initial moisture (m) and pressing pressure (p).
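A minimal sketch of this kind of formulation, predicting a drying parameter from initial moisture and pressing pressure by linear least squares, is shown below; the measurement values are fabricated for illustration, and the model form in the paper may differ:

```python
import numpy as np

# fabricated measurements: initial moisture m (wt%), pressing pressure p (MPa),
# and the observed drying shrinkage d (%)
m = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
p = np.array([20.0, 22.0, 25.0, 25.0, 28.0, 30.0])
d = np.array([0.40, 0.48, 0.55, 0.62, 0.72, 0.80])

# linear model d = c0 + c1*m + c2*p, fitted by least squares
A = np.column_stack([np.ones_like(m), m, p])
coef, *_ = np.linalg.lstsq(A, d, rcond=None)
pred = A @ coef
mse = float(np.mean((pred - d) ** 2))
```

Analogous fits for bending strength after drying (b) and residual moisture (r) would reuse the same design matrix with their own response vectors.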

  15. Adjustment parameters in the Betts-Miller scheme of convection over South America

    NASA Astrophysics Data System (ADS)

    Luís Gomes, Jorge; Chou, Sin Chan; Lorena Guida, Lucas

    2013-04-01

    The Eta model has been used operationally at CPTEC/INPE since 1996. This model uses the Betts-Miller-Janjic (BMJ) convection scheme, which was developed based on convective adjustment. For the construction of the reference temperature and moisture profiles, three parameters were defined: the stability weight, the saturation pressure departure, and the adjustment time period. To define an optimum set of parameters over South America, a number of experiments were carried out at CPTEC/INPE, and the best set was adopted for the operational runs. The parameters are homogeneous over the model domain and kept constant throughout the year. Such homogeneously specified profiles may provide misleading representations of various vertical structures. In this work the Eta model was configured with a 40-km grid size, and the vertical resolution was set to 38 layers. The model domain covers the whole of South America and part of Central America. The BMJ scheme was changed to permit a different set of parameter values at each model grid point. We noted in the control runs that the equitable threat and bias scores of quantitative precipitation forecasts (QPF) show different skill depending on the verification region. A pronounced high bias in the precipitation forecast was found on mountain slopes, near the peak over Minas Gerais State in southeast Brazil. Experiments were carried out changing the saturation pressure departure values only near the mountain peaks. The changes in the saturation pressure departure produced different distributions and amounts of total precipitation, and results indicate that they reduced the precipitation bias over the mountains. The Eta model with the BMJ scheme characteristically produces most of the model total precipitation through the scheme; the experiments changed the partition of implicit and explicit precipitation.

  16. Adjustment of Sensor Locations During Thermal Property Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Marschall, Jochen; Rasky, Daniel J. (Technical Monitor)

    1996-01-01

    The temperature-dependent thermal properties of a material may be evaluated from transient temperature histories using nonlinear parameter estimation techniques. The usual approach is to minimize the sum of the squared errors between measured and calculated temperatures at specific locations in the body. Temperature measurements are usually made with thermocouples, and it is customary to take thermocouple locations as known and fixed during parameter estimation computations. In fact, thermocouple locations are never known exactly. Location errors on the order of the thermocouple wire diameter are intrinsic to most common instrumentation procedures (e.g., inserting a thermocouple into a drilled hole), and additional errors can be expected for delicate materials, difficult installations, large thermocouple beads, etc. Thermocouple location errors are especially significant when estimating thermal properties of low-diffusivity materials, which can sustain large temperature gradients during testing. In the present work, a parameter estimation formulation is presented which allows for the direct inclusion of thermocouple positions in the primary parameter estimation procedure. It is straightforward to set bounds on thermocouple locations which exclude non-physical locations and are consistent with installation tolerances. Furthermore, bounds may be tightened to an extent consistent with any independent verification of thermocouple location, such as x-raying, so the procedure is entirely consonant with experimental information. A mathematical outline of the procedure is given and its implementation is illustrated through numerical examples characteristic of lightweight, high-temperature ceramic insulation during transient heating. The efficacy of the procedure and the errors associated with it are discussed.
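A toy version of jointly estimating a thermal property and a sensor location can be sketched with bounded nonlinear least squares; the semi-infinite-solid conduction model, the two-sensor setup, and all numbers below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erfc

# Semi-infinite solid whose surface is stepped to Ts at t = 0:
#   T(x, t) = T0 + (Ts - T0) * erfc(x / (2 * sqrt(alpha * t)))
T0, Ts = 20.0, 500.0
t = np.linspace(1.0, 60.0, 60)             # time samples (s)
x1 = 3.0e-3                                 # depth of sensor 1, assumed known (m)

def model(params):
    alpha, x2 = params                      # diffusivity (m^2/s), unknown sensor depth (m)
    T1 = T0 + (Ts - T0) * erfc(x1 / (2.0 * np.sqrt(alpha * t)))
    T2 = T0 + (Ts - T0) * erfc(x2 / (2.0 * np.sqrt(alpha * t)))
    return np.concatenate([T1, T2])

# synthetic "measurements": the true depth is 5.2 mm where 5.0 mm was nominal
data = model((1.0e-6, 5.2e-3))

# estimate alpha and the sensor depth jointly; the depth is bounded by an
# installation tolerance, mirroring the bounded-location idea in the abstract
fit = least_squares(lambda p: model(p) - data,
                    x0=[5.0e-7, 5.0e-3],
                    bounds=([1.0e-8, 4.5e-3], [1.0e-5, 5.5e-3]),
                    x_scale=[1.0e-6, 1.0e-3])
alpha_hat, x_hat = fit.x
```

The known sensor at x1 keeps the problem identifiable: its trace pins down alpha, after which the second trace determines the unknown depth x2.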

  17. 40 CFR 90.112 - Requirement of certification-adjustable parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Requirement of certification-adjustable parameters. 90.112 Section 90.112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Emission Standards and Certification Provisions § 90.112 Requirement of certification—adjustable parameters...

  18. Adjustment of pelvispinal parameters preserves the constant gravity line position.

    PubMed

    Geiger, Emanuel V; Müller, Otto; Niemeyer, Thomas; Kluba, Torsten

    2007-04-01

    There is a high variance in sagittal morphology and complaints between different subjects suffering from spinal disorders. Sagittal spinal alignment and clinical presentation are not closely related. Different parameters have been used to describe the pelvispinal morphology based on standing lateral radiographs. We conducted a study using radiography of the lumbar spine combined with force platform data to examine the correlation between pelvispinal parameters and the gravity line position. Fifty consecutive patients with a mean age of 55 years (18-84 years) were compared to normal controls. Among patients we found a statistically significant correlation between the following spinal parameters: lumbar lordosis and sacral slope (r=0.77; P<0.001), sacral slope and pelvic incidence (r=0.72; P<0.001) and pelvic tilt and overhang (r=-0.93; P<0.001). In patients and controls, the gravity line position was found to be located at 60 and 61%, respectively, of the foot length measured from the great toe, ranging from 53 to 69%, when corrected for the individual foot length. The results indicate that subjects with and without spinal disorders have their gravity line position localised within a very small range despite the high variability for lumbar lordosis and pelvic tilt.

  19. Beyond Adjustment: Parameters of Successful Resolution of Bereavement.

    ERIC Educational Resources Information Center

    Rubin, Simon Shimshon

    The problem of human response to loss is complex. To approach understanding of this process it is valuable to use a number of models. Phenomenologically the application of a temporal matrix divides the reaction into three useful heuristic and empirical stages: initial, acute grief (1-3 months); mourning (1-2 years); and post-mourning, with no set…

  20. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application. PMID:26343671
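The paper's two-dimensional Duffing model is not reproduced here, but the SR mechanism it exploits can be illustrated with the simpler overdamped bistable system commonly used in the SR literature; all parameter values below are illustrative:

```python
import numpy as np

# Overdamped bistable system, a first-order analogue of the Duffing oscillator:
#   dx/dt = a*x - b*x^3 + A*sin(2*pi*f0*t) + noise
rng = np.random.default_rng(0)
a, b = 1.0, 1.0                 # bistable potential with wells at +/- sqrt(a/b)
A, f0 = 0.3, 0.01               # weak periodic forcing (sub-threshold)
D = 0.4                         # noise intensity
dt, n = 0.01, 200_000

t = np.arange(n) * dt
noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 1.0
for i in range(n - 1):          # Euler-Maruyama integration
    drift = a * x[i] - b * x[i] ** 3 + A * np.sin(2.0 * np.pi * f0 * t[i])
    x[i + 1] = x[i] + drift * dt + noise[i]

switches = int(np.count_nonzero(np.diff(np.sign(x)) != 0))  # inter-well transitions
```

The Kramers rate mentioned in the abstract governs how often such inter-well transitions occur; SR arises when the noise intensity tunes that rate to roughly twice the forcing frequency.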

  1. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application.

  2. 40 CFR 90.112 - Requirement of certification-adjustable parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Requirement of certification-adjustable parameters. 90.112 Section 90.112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... KILOWATTS Emission Standards and Certification Provisions § 90.112 Requirement of certification—adjustable...

  3. 40 CFR 90.112 - Requirement of certification-adjustable parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Requirement of certification-adjustable parameters. 90.112 Section 90.112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... KILOWATTS Emission Standards and Certification Provisions § 90.112 Requirement of certification—adjustable...

  4. Parameter extraction and transistor models

    NASA Technical Reports Server (NTRS)

    Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, QI

    1985-01-01

    Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, the limitations of the proposed models, and the complexity of the required numerical investigations.
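As a minimal, hypothetical example of this kind of parameter extraction (not the JPL models themselves), the textbook square-law saturation model can be fitted to measured drain currents by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Square-law saturation model, one of the simplest one-dimensional descriptions:
#   Id = 0.5 * k * (Vgs - Vt)^2  for Vgs > Vt, else 0
def id_sat(vgs, k, vt):
    v = np.maximum(vgs - vt, 0.0)
    return 0.5 * k * v ** 2

vgs = np.linspace(1.0, 3.0, 9)
id_meas = id_sat(vgs, 2.0e-4, 0.7)        # hypothetical "measured" currents

# extract the optimal parameter values (k, Vt) by nonlinear least squares
(k_hat, vt_hat), _ = curve_fit(id_sat, vgs, id_meas, p0=[1.0e-4, 0.5])
```

Sensitivity to parameter variations, as assessed in the abstract, amounts to perturbing k_hat and vt_hat and observing the change in the predicted currents.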

  5. Photovoltaic module parameters acquisition model

    NASA Astrophysics Data System (ADS)

    Cibira, Gabriel; Koščová, Marcela

    2014-09-01

    This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on an equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn the basic relations among its parameters over the sparse data matrix. Finally, PV module parameters can be acquired at different realistic irradiation and temperature conditions as well as series resistances. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.
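A minimal Python analogue of the first step, computing I-V and P-V characteristics from an equivalent-circuit model, might look as follows; the ideal single-diode form and all module parameters are assumptions for illustration (the paper itself uses MATLAB/Simulink):

```python
import numpy as np

# Ideal single-diode model (series/shunt resistance neglected):
#   I = Iph - I0 * (exp(V / (n * Vt)) - 1)
q, kB, T = 1.602e-19, 1.381e-23, 298.15
Ns = 36                                    # cells in series (assumed)
Vt = Ns * kB * T / q                       # module-level thermal voltage
Iph, I0, n = 5.0, 1.0e-7, 1.3              # assumed module parameters

V = np.linspace(0.0, 22.0, 500)
I = np.clip(Iph - I0 * (np.exp(V / (n * Vt)) - 1.0), 0.0, None)
P = V * I                                  # P-V characteristic
v_mpp, p_mpp = float(V[np.argmax(P)]), float(P.max())
```

Fitting Iph, I0, and n so that this curve matches a measured I-V data string would correspond to the optimization step described in the abstract.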

  6. Testing a theoretical model for examining the relationship between family adjustment and expatriates' work adjustment.

    PubMed

    Caligiuri, P M; Hyland, M M; Joshi, A; Bross, A S

    1998-08-01

    Based on theoretical perspectives from the work/family literature, this study tested a model for examining expatriate families' adjustment while on global assignments as an antecedent to expatriates' adjustment to working in a host country. Data were collected from 110 families that had been relocated for global assignments. Longitudinal data, assessing family characteristics before the assignment and cross-cultural adjustment approximately 6 months into the assignment, were coded. This study found that family characteristics (family support, family communication, family adaptability) were related to expatriates' adjustment to working in the host country. As hypothesized, the families' cross-cultural adjustment mediated the effect of family characteristics on expatriates' host-country work adjustment.

  7. Effect of Flux Adjustments on Temperature Variability in Climate Models

    SciTech Connect

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-12-27

    It has been suggested that ''flux adjustments'' in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  8. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages at such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent a more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing
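One building block of such models, the exponential (well-mixed) lumped parameter model with radioactive decay, can be sketched as a discrete convolution; the tritium input history below is fabricated for illustration:

```python
import numpy as np

lam = np.log(2.0) / 12.32                 # tritium decay constant (1/yr)

def exp_model_output(c_in, mrt, dt=1.0):
    """Output concentration of an exponential (well-mixed) LPM.

    Convolves the input series with the exponential age distribution
    g(tau) = exp(-tau/MRT)/MRT, including radioactive decay.
    c_in is ordered oldest first, most recent year last.
    """
    tau = np.arange(len(c_in)) * dt
    g = np.exp(-tau / mrt) / mrt * np.exp(-lam * tau) * dt
    return float(np.sum(c_in[::-1] * g))  # reverse so tau = 0 maps to the latest input

# fabricated tritium input history (TU), declining toward the present
c_in = np.linspace(5.0, 2.0, 50)
out_young = exp_model_output(c_in, mrt=5.0)
out_old = exp_model_output(c_in, mrt=30.0)
```

A binary mixing model would combine two such outputs, e.g. f * exp_model_output(c_in, mrt1) + (1 - f) * exp_model_output(c_in, mrt2), which is where the additional parameters (and the data hunger) come from.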

  9. Effect of flux adjustments on temperature variability in climate models

    NASA Astrophysics Data System (ADS)

    CMIP investigators; Duffy, P. B.; Bell, J.; Covey, C.; Sloan, L.

    2000-03-01

    It has been suggested that “flux adjustments” in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  10. Identification of driver model parameters.

    PubMed

    Reński, A

    2001-01-01

    The paper presents a driver model which can be used in computer simulation of a curved ride of a car. The identification of the driver parameters consisted of a comparison of the results of computer calculations obtained for the driver-vehicle-environment model with different driver data sets against test results of the double lane-change manoeuvre (Standard No. ISO/TR 3888:1975, International Organization for Standardization [ISO], 1975) and the wind gust manoeuvre. The optimisation method makes it possible to choose, for each real driver, a set of driver model parameters for which the differences between test and calculation results are smallest. The presented driver model can be used in investigating the driver-vehicle control system, which makes it possible to adapt the car construction to the psychophysical characteristics of a driver.

  11. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. 
Steady-state excitation, and de-energization and re-energization transients

  12. Reductions in particulate and NO(x) emissions by diesel engine parameter adjustments with HVO fuel.

    PubMed

    Happonen, Matti; Heikkilä, Juha; Murtonen, Timo; Lehto, Kalle; Sarjovaara, Teemu; Larmi, Martti; Keskinen, Jorma; Virtanen, Annele

    2012-06-05

    Hydrotreated vegetable oil (HVO) diesel fuel is a promising biofuel candidate that can complement or substitute traditional diesel fuel in engines. It has already been reported that changing the fuel from conventional EN590 diesel to HVO decreases exhaust emissions. However, as the fuels have certain chemical and physical differences, it is clear that the full advantage of HVO cannot be realized unless the engine is optimized for the new fuel. In this article, we studied how much exhaust emissions can be reduced by adjusting engine parameters for HVO. The results indicate that, at all the studied loads (50%, 75%, and 100%), particulate mass and NO(x) can both be reduced by over 25% through engine parameter adjustments. Further, the emission reduction was even higher when the target of adjusting engine parameters was to exclusively reduce either particulates or NO(x). In addition to particulate mass, different indicators of particulate emissions were also compared. These indicators included filter smoke number (FSN), total particle number, total particle surface area, and geometric mean diameter of the emitted particle size distribution. As a result of this comparison, a linear correlation between FSN and total particulate surface area in the low-FSN region was found.

  13. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry.

    PubMed

    Meyer, Andrew J; Patten, Carolynn; Fregly, Benjamin J

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient's lower extremity muscle excitations contribute to the patient's lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient's musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with

  14. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry

    PubMed Central

    Meyer, Andrew J.; Patten, Carolynn

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that

  15. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the plotted points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
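    The summary fit described in the abstract, a simple linear regression through the origin on the plotted coordinates, can be sketched as follows (variable names and data are illustrative, not from the paper):

```python
import numpy as np

def origin_fit(x, y):
    """Slope and standard error for y = b*x (regression through the origin)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.sum(x * y) / np.sum(x ** 2)
    resid = y - b * x
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 1) / np.sum(x ** 2))
    return b, se

x = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])   # adjusted covariate coordinates
b, se = origin_fit(x, 2.0 * x)              # exact line: slope 2, zero error
```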

  16. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  17. Deformation measurement using digital image correlation by adaptively adjusting the parameters

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    2016-12-01

    As a contactless full-field displacement and strain measurement technique, two-dimensional digital image correlation (DIC) has been increasingly employed to reconstruct in-plane deformation in the field of experimental mechanics. In practical applications, the selection of subset size and search zone size has been shown to exert a critical influence on DIC measurement results, especially when large deformation over the search zone causes decorrelation between the reference image and the deformed image. The correlation coefficient is an important parameter in DIC, and it provides the most direct connection between the subset size and the search zone. A self-adaptive correlation parameter adjustment method is proposed that uses a correlation coefficient threshold to adjust the size of the subset and the search zone, making measurement efficient. The feasibility and effectiveness of the proposed method are verified through a set of experiments, which indicate that the presented algorithm significantly reduces the cumbersome trial calculation required by traditional DIC, in which the initial correlation parameters must be manually selected in advance based on practical experience.
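    A minimal sketch of the self-adaptive idea on a pure-translation test case (the subset sizes, search range, and the 0.99 ZNCC threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(ref, cur, center, half, search=4):
    """Best integer displacement of the subset within the search zone."""
    ci, cj = center
    sub = ref[ci - half:ci + half + 1, cj - half:cj + half + 1]
    best_c, best_d = -1.0, (0, 0)
    for u in range(-search, search + 1):
        for v in range(-search, search + 1):
            cand = cur[ci + u - half:ci + u + half + 1,
                       cj + v - half:cj + v + half + 1]
            c = zncc(sub, cand)
            if c > best_c:
                best_c, best_d = c, (u, v)
    return best_d, best_c

def adaptive_match(ref, cur, center, sizes=(5, 9, 13), threshold=0.99):
    for half in sizes:                     # enlarge the subset on demand
        d, c = match(ref, cur, center, half)
        if c >= threshold:
            break
    return d, c, half

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, (2, 3), axis=(0, 1))    # pure translation by (2, 3)
disp, coeff, half = adaptive_match(ref, cur, (32, 32))
```

    With a clean translation the smallest subset already reaches the threshold, so no larger subset is tried.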

  18. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    PubMed

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  19. Balloon Thermal Model Design Parameters and Sensitivities

    NASA Technical Reports Server (NTRS)

    Ferguson, Douglas

    2017-01-01

    This presentation describes the thought process for determining balloon thermal model design parameters, including environmental parameters taken from NASA's top-of-atmosphere (TOA) database, and shows the sensitivity of an example model's key temperature results to those input parameters.

  20. Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data

    PubMed Central

    Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.

    2009-01-01

    Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053

  1. Parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness--death model.

    PubMed

    Pradhan, Biswabrata; Dewanji, Anup

    2009-07-10

    In this work, we consider the parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness-death models. The main idea of this paper is to derive the theoretical distribution of QAL for the progressive illness-death models, under parametric models for the sojourn time distributions in different states, and then replace the model parameters by their estimates obtained by standard techniques of survival analysis. The method of estimation of the model parameters is also described. A data set of IBCSG Trial V has been analyzed for illustration. Extension to more general illness-death models is also discussed.
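    The "derive the theoretical QAL distribution, then plug in estimated sojourn parameters" idea can be illustrated by checking the QAL mean in a minimal progressive illness-death model (the rates and utility weight below are made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, mu, nu, w = 0.5, 0.2, 0.4, 0.6        # transition rates, ill-state utility
n = 200_000

# Healthy state: leave at rate lam + mu; progress to illness w.p. lam/(lam+mu)
t_healthy = rng.exponential(1.0 / (lam + mu), n)
progressed = rng.random(n) < lam / (lam + mu)
t_ill = np.where(progressed, rng.exponential(1.0 / nu, n), 0.0)

qal = 1.0 * t_healthy + w * t_ill          # quality adjusted lifetime

# Closed-form mean: E[QAL] = 1/(lam+mu) + (lam/(lam+mu)) * w/nu
expected = 1.0 / (lam + mu) + (lam / (lam + mu)) * w / nu
```

    The Monte Carlo mean agrees with the closed-form expression, which is the kind of identity the parametric approach exploits.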

  2. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  4. Shape adjustment of cable mesh reflector antennas considering modeling uncertainties

    NASA Astrophysics Data System (ADS)

    Du, Jingli; Bao, Hong; Cui, Chuanzhen

    2014-04-01

    Cable mesh reflectors are currently the most important means of constructing large space antennas. The reflector surface of a cable mesh antenna has to be carefully adjusted to achieve the required accuracy, which is an effective way to compensate for manufacturing and assembly errors and other imperfections. This paper addresses shape adjustment of cable mesh antennas. The required displacement of the reflector surface is determined with respect to a modified paraboloid whose axial vertex offset is also considered as a variable. The adjustment problem is then solved by minimizing the RMS error with respect to the desired paraboloid using the minimal-norm least squares method. To deal with modeling uncertainties, the adjustment is achieved by solving a simple worst-case optimization problem instead of directly using the least squares method. A numerical example demonstrates that the worst-case method converges well, is accurate, and is robust to perturbations.
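    The minimal-norm least squares step can be sketched as follows; the sensitivity matrix and required displacements here are random stand-ins, not a real cable mesh model:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((30, 12))   # node-displacement sensitivity to cable adjustments
d = rng.standard_normal(30)         # required displacement toward the paraboloid

# Minimal-norm least squares adjustment of the cables
x = np.linalg.lstsq(S, d, rcond=None)[0]

rms_before = np.sqrt(np.mean(d ** 2))
rms_after = np.sqrt(np.mean((d - S @ x) ** 2))
```

    For an overdetermined full-rank system this coincides with the pseudoinverse solution; the worst-case variant in the abstract would replace this plain solve with a robust optimization.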

  5. Initial experience in operation of furnace burners with adjustable flame parameters

    SciTech Connect

    Garzanov, A.L.; Dolmatov, V.L.; Saifullin, N.R.

    1995-07-01

    The designs of burners currently used in tube furnaces (CP, FGM, GMG, GIK, GNF, etc.) do not have any provision for adjusting the heat-transfer characteristics of the flame, since the gas and air feed systems in these burners do not allow any variation of the parameters of mixture formation, even though this process is critical in determining the length, shape, and luminosity of the flame and also the furnace operating conditions: efficiency, excess air coefficient, flue gas temperature at the bridgewall, and other indexes. In order to provide control of the heat-transfer characteristics of the flame, the Elektrogorsk Scientific-Research Center (ENITs), on assignment from the Novo-Ufa Petroleum Refinery, developed a burner with diffusion regulation of the flame. The gas nozzle of the burner is made up of two coaxial gas chambers 1 and 2, with independent feed of gas from a common line through two supply lines.

  6. Study on Optimization of Electromagnetic Relay's Reaction Torque Characteristics Based on Adjusted Parameters

    NASA Astrophysics Data System (ADS)

    Zhai, Guofu; Wang, Qiya; Ren, Wanbin

    The cooperative characteristics of an electromagnetic relay's attraction torque and reaction torque are the key property ensuring its reliability, and it is important to attain better cooperative characteristics by analyzing and optimizing the relay's electromagnetic and mechanical systems. From the standpoint of changing the reaction torque of the mechanical system, in this paper, adjusted parameters (the armature's maximum angular displacement αarm_max, the initial return spring force Finiti_return_spring, the normally closed (NC) contacts' force FNC_contacts, the contacts' gap δgap, and the normally opened (NO) contacts' overtravel δNO_contacts) were adopted as design variables, and an objective function was formulated with the purpose of increasing the breaking velocities of both the NC and NO contacts. Finally, a genetic algorithm (GA) was used to optimize the objective function. The accuracy of the calculated dynamic characteristics of the relay was verified by experiment.
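    As a toy illustration of the optimization step, the sketch below runs a minimal real-coded genetic algorithm on a stand-in objective; the relay torque objective, design variables, and all GA settings here are hypothetical, not from the paper:

```python
import numpy as np

def ga_maximize(f, lo, hi, pop=40, gens=60, seed=0):
    """Tournament selection, arithmetic crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = f(x)
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(fit[i] > fit[j], x[i], x[j])   # tournament
        partners = rng.permutation(parents)
        w = rng.uniform(0.0, 1.0, pop)
        x = w * parents + (1.0 - w) * partners            # crossover
        x += rng.normal(0.0, 0.05 * (hi - lo), pop)       # mutation
        x = np.clip(x, lo, hi)
    return x[np.argmax(f(x))]

# Stand-in objective with a known maximum at x = 3
best = ga_maximize(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```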

  7. Parenting Stress, Mental Health, Dyadic Adjustment: A Structural Equation Model

    PubMed Central

    Rollè, Luca; Prino, Laura E.; Sechi, Cristina; Vismara, Laura; Neri, Erica; Polizzi, Concetta; Trovato, Annamaria; Volpi, Barbara; Molgora, Sara; Fenaroli, Valentina; Ierardi, Elena; Ferro, Valentino; Lucarelli, Loredana; Agostini, Francesca; Tambelli, Renata; Saita, Emanuela; Riva Crugnola, Cristina; Brustia, Piera

    2017-01-01

    Objective: In the 1st year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, there are few studies that analyze the relationship among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed the full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ across mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood. PMID:28588541

  8. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been applied clinically. Recently, more and more cases with positive therapeutic effects of transcranial light-emitting diode (LED) illumination have been reported. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of the subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software portion provides on/off control of each LED array, setup of the illumination parameters, and a 3D distribution of the LLLT light dose in the human subject according to the illumination setup. This LLLT light dose distribution was computed by a Monte Carlo model for voxelized media and the Visible Chinese Human head dataset, and displayed in 3D against the anatomical structure of the head. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter adjustable and illumination-distribution visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.

  9. Parameter estimation for distributed parameter models of complex, flexible structures

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.

    1991-01-01

    Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although other computer software is able to generate such models of complex, flexible spacecraft, none of it is suitable for parameter estimation. Because of this limitation, the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation, and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach on test data from the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can significantly impact the instrumentation requirements for on-orbit tests.

  10. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests using the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMMs) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.

  11. [Parameters with possible influence in PSA adjusted for transition zone volume].

    PubMed

    Jara Rascón, J; Subirá Ríos, D; Lledó Garcia, E; Martínez Salamanca, J I; Moncada Iribarren, I; Cabello Benavente, R; Hernández Fernández, C

    2005-05-01

    To evaluate the effect of age, digital rectal examination results, and prostatic volume on the PSA value adjusted for transition zone volume (PSA-TZ) in the detection of prostate cancer. Data from 243 patients with serum PSA of 4 to 20 ng/ml who underwent biopsy because of suspicion of prostate cancer were analyzed. In this population, cancer was detected in 62 cases (24.8%). Total prostatic volume and transition zone volume were calculated by transrectal echography applying the ellipsoid formula. Applying linear regression analysis, no correlation was found between age and PSA-TZ (Pearson coefficient 0.00). Dividing these patients between those with a normal rectal examination (84%) and those with a suspicious digital rectal examination (16%), cutoff values of PSA-TZ were found by ROC curve analysis not to differ for 95% sensitivity, with specificity varying only between 24% and 26% between these two groups of patients. Prostatic size (< or =40 cc or >40 cc) showed that, to obtain the same 95% sensitivity in the detection of cancer, the PSA-TZ cutoff would need to be modified, being 0.17 in large prostates (>40 cc) and 0.25 in small prostates (< or =40 cc). The utility of PSA-TZ as a potential predictor of prostate cancer did not need to be modified with respect to age or to digital rectal examination findings. However, to maintain the sensitivity of its best cutoff value, PSA-TZ would need to be adjusted with respect to total prostatic volume.
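    Fixing sensitivity at 95% and reading off the resulting specificity, as done in the abstract, can be sketched on synthetic scores (the score distributions below are illustrative, not patient data):

```python
import numpy as np

def cutoff_for_sensitivity(cancer, benign, target=0.95):
    """Cutoff holding sensitivity near target; returns achieved sens/spec."""
    cut = np.quantile(cancer, 1.0 - target)
    sens = float(np.mean(cancer >= cut))
    spec = float(np.mean(benign < cut))
    return cut, sens, spec

rng = np.random.default_rng(3)
cancer = rng.normal(1.0, 1.0, 1000)    # scores, biopsy-positive group
benign = rng.normal(0.0, 1.0, 1000)    # scores, biopsy-negative group
cut, sens, spec = cutoff_for_sensitivity(cancer, benign)
```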

  12. Parameter Identifiability of Fundamental Pharmacodynamic Models

    PubMed Central

    Janzén, David L. I.; Bergenholm, Linnéa; Jirstrand, Mats; Parkinson, Joanna; Yates, James; Evans, Neil D.; Chappell, Michael J.

    2016-01-01

    Issues of parameter identifiability of routinely used pharmacodynamic models are considered in this paper. The structural identifiability of 16 commonly applied pharmacodynamic model structures was analyzed analytically, using the input-output approach. Both fixed-effects versions (non-population, no between-subject variability) and mixed-effects versions (population, including between-subject variability) of each model structure were analyzed. All models were found to be structurally globally identifiable under conditions of fixing either one of two particular parameters. Furthermore, an example was constructed to illustrate the importance of sufficient data quality and show that structural identifiability is a prerequisite, but not a guarantee, for successful parameter estimation and practical parameter identifiability. This analysis was performed by generating artificial data of varying quality to a structurally identifiable model with known true parameter values, followed by re-estimation of the parameter values. In addition, to show the benefit of including structural identifiability as part of model development, a case study was performed applying an unidentifiable model to real experimental data. This case study shows how performing such an analysis prior to parameter estimation can improve the parameter estimation process and model performance. Finally, an unidentifiable model was fitted to simulated data using multiple initial parameter values, resulting in highly different estimated uncertainties. This example shows that although the standard errors of the parameter estimates often indicate a structural identifiability issue, reasonably “good” standard errors may sometimes mask unidentifiability issues. PMID:27994553

  13. Age of dam and sex of calf adjustments and genetic parameters for gestation length in Charolais cattle.

    PubMed

    Crews, D H

    2006-01-01

    To estimate adjustment factors and genetic parameters for gestation length (GES), AI and calving date records (n = 40,356) were extracted from the Canadian Charolais Association field database. The average time from AI to calving date was 285.2 d (SD = 4.49 d) and ranged from 274 to 296 d. Fixed effects were sex of calf, age of dam (2, 3, 4, 5 to 10, > or = 11 yr), and gestation contemporary group (year of birth x herd of origin). Variance components were estimated using REML and 4 animal models (n = 84,332) containing from 0 to 3 random maternal effects. Model 1 (M1) contained only direct genetic effects. Model 2 (M2) was M1 plus maternal genetic effects with the direct x maternal genetic covariance constrained to zero, and model 3 (M3) was M2 without the covariance constraint. Model 4 (M4) extended M3 to include a random maternal permanent environmental effect. Direct heritability estimates were high and similar among all models (0.61 to 0.64), and maternal heritability estimates were low, ranging from 0.01 (M2) to 0.09 (M3). Likelihood ratio tests and parameter estimates suggested that M4 was the most appropriate (P < 0.05) model. With M4, phenotypic variance (18.35 d2) was partitioned into direct and maternal genetic, and maternal permanent environmental components (hd2 = 0.64 +/- 0.04, hm2 = 0.07 +/- 0.01, r(d,m) = -0.37 +/- 0.06, and c2 = 0.03 +/- 0.01, respectively). Linear contrasts were used to estimate that bull calves gestated 1.26 d longer (P < 0.02) than heifers, and adjustments to a mature equivalent (5 to 10 yr old) age of dam were 1.49 (P < 0.01), 0.56 (P < 0.01), 0.33 (P < 0.01), and -0.24 (P < 0.14) d for GES records of calves born to 2-, 3-, 4-, and > or = 11-yr-old cows, respectively. Bivariate animal models were used to estimate genetic parameters for GES with birth and adjusted 205-d weaning weights, and postweaning gain.
Direct GES was positively correlated with direct birth weight (BWT; 0.34 +/- 0.04) but negatively correlated with maternal
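    The reported ratios can be converted back into variance components with the standard definitions h² = σ²/Vp and r(d,m) = σdm/(σd·σm); the values below are taken from the abstract:

```python
import math

# Values reported in the abstract (model M4)
Vp = 18.35                     # phenotypic variance, d^2
hd2, hm2, c2, r_dm = 0.64, 0.07, 0.03, -0.37

var_d = hd2 * Vp               # direct genetic variance
var_m = hm2 * Vp               # maternal genetic variance
var_c = c2 * Vp                # maternal permanent environmental variance
cov_dm = r_dm * math.sqrt(var_d * var_m)   # direct-maternal covariance
```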

  14. A simple approach to adjust tidal forcing in fjord models

    NASA Astrophysics Data System (ADS)

    Hjelmervik, Karina; Kristensen, Nils Melsom; Staalstrøm, André; Røed, Lars Petter

    2017-07-01

    To model currents in a fjord, accurate tidal forcing is of extreme importance. Due to complex topography with narrow and shallow straits, the tides in the innermost parts of a fjord are both shifted in phase and altered in amplitude compared to the tides in the open water outside the fjord. Commonly, coastal tide information extracted from global or regional models is used on the boundary of the fjord model. Since tides vary over short distances in shallower waters close to the coast, the global and regional tidal forcings are usually too coarse to achieve sufficiently accurate tides in fjords. We present a straightforward method to remedy this problem by simply adjusting the tides to fit the observed tides at the entrance of the fjord. To evaluate the method, we present results from the Oslofjord, Norway. A model for the fjord is first run using raw tidal forcing on its open boundary. By comparing modelled and observed time series of water level at a tide gauge station close to the open boundary of the model, a factor for the amplitude and a shift in phase are computed. The amplitude factor and the phase shift are then applied to produce adjusted tidal forcing at the open boundary. Next, we rerun the fjord model using the adjusted tidal forcing. The results from the two runs are then compared to independent observations inside the fjord in terms of the amplitudes and phases of the various tidal components, the total tidal water level, and the depth-integrated tidal currents. The results show improvements in the modelled tides in both the outer and, more importantly, the inner parts of the fjord.
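    The amplitude-factor and phase-shift correction can be sketched on synthetic gauge data; here the factor and lag are estimated with a complex harmonic fit at the M2 frequency (the estimation recipe and all numbers are illustrative, not the paper's):

```python
import numpy as np

T_M2 = 12.42 * 3600.0                     # M2 tidal period [s]
omega = 2.0 * np.pi / T_M2
t = np.arange(400) * (T_M2 / 100.0)       # exactly four M2 cycles

modeled = 0.8 * np.sin(omega * t)               # raw modelled tide at the gauge
observed = 1.2 * np.sin(omega * (t - 1800.0))   # stronger and 30 min later

def harmonic(x):
    """Complex amplitude of x at the tidal frequency."""
    return np.sum(x * np.exp(-1j * omega * t))

ratio = harmonic(observed) / harmonic(modeled)
amp_factor = abs(ratio)                   # amplitude correction factor
phase_lag = -np.angle(ratio) / omega      # phase shift in seconds

# Adjusted forcing: scale the raw signal and delay its phase.
adjusted = amp_factor * 0.8 * np.sin(omega * (t - phase_lag))
```

    For this single-constituent example the adjusted signal reproduces the observations exactly; with real data each major constituent would get its own factor and shift.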

  15. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Treesearch

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  16. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and the parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
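    The "solve the PDE once per candidate value" baseline that the paper improves on can be sketched for a toy heat equation u_t = θ·u_xx (the grid, noise level, and candidate range are illustrative):

```python
import numpy as np

nx, nt = 51, 400
dx, dt = 1.0 / (nx - 1), 2e-4

def solve_heat(theta):
    """Explicit finite-difference solve of u_t = theta*u_xx with u = 0 at the ends."""
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition
    for _ in range(nt):
        u[1:-1] += theta * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

theta_true = 0.5
rng = np.random.default_rng(1)
data = solve_heat(theta_true) + rng.normal(0.0, 1e-3, nx)

# Naive estimation: one numerical solve per candidate value, keep the
# best fit (81 candidates here; realistic problems may need thousands).
thetas = np.linspace(0.3, 0.7, 81)
sse = [np.sum((solve_heat(th) - data) ** 2) for th in thetas]
theta_hat = thetas[int(np.argmin(sse))]
```

    The basis-expansion methods in the abstract avoid exactly this repeated-solve loop.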

  17. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    PubMed

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  18. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  19. Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG

    PubMed Central

    Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong

    2013-01-01

    Recently, data with complex characteristics, such as epilepsy electroencephalography (EEG) time series, has emerged. Epilepsy EEG data has special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients with −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, reflecting the special characteristics of the data, to forecast future values of epilepsy EEG data. Experimental results show that, compared to previous methods, the proposed method can forecast faster and more accurately. PMID:23710252
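    One reading of the coercion step, sketched below, is clipping least-squares AR coefficients into [−1, 1] before forecasting; this clipping rule is an assumption for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def fit_ca_ar(x, p):
    """Least-squares AR(p) fit with coefficients clipped into [-1, 1]."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    coef = np.linalg.lstsq(X, x[p:], rcond=None)[0]
    return np.clip(coef, -1.0, 1.0)        # "coercive" adjustment (assumed rule)

def forecast_next(x, coef):
    """One-step-ahead forecast from the most recent p samples."""
    p = len(coef)
    return float(coef @ x[-1 : -p - 1 : -1])

rng = np.random.default_rng(7)
x = np.zeros(500)
for t in range(2, 500):                    # simulate a stable AR(2) series
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + 0.01 * rng.standard_normal()

coef = fit_ca_ar(x[:-1], 2)                # fit on all but the last sample
pred = forecast_next(x[:-1], coef)         # forecast the held-out sample
```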

  20. End-point parameter adjustment on a small desk-top programmable calculator for logit-log analysis of radioimmunoassay data.

    PubMed

    Hatch, K F; Coles, E; Busey, H; Goldman, S C

    1976-08-01

    We describe an improved method of logit-log curve fitting, by adjusting end-point parameters in radioimmunoassay studies, for use with a small desk-top programmable calculator. Straight logit-log analyses are often deficient because of their high sensitivity to small errors in the end-point parameters B0 and NSB (the actual measured activity in the tubes). The literature suggests techniques for adjusting these end-point parameters, but they require too much computing time and programming space to be used with a desk-top programmable calculator. The extension to the logit-log model presented here is easily handled by the programmable calculator and provides a good estimate of the change required in B0 and NSB to obtain a better fit. The program requires 1.5 min to run on our desk-top programmable calculator, and has resulted in improved data analysis for all of the 11 types of radioimmunoassay studied.
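    The end-point adjustment idea can be sketched with a crude grid search on B0 standing in for the paper's calculator-friendly update rule (synthetic counts; all numbers are illustrative):

```python
import numpy as np

def logit_log_sse(counts, dose, b0, nsb):
    """Lack of fit of the logit-log line for given end-point parameters."""
    y = (counts - nsb) / (b0 - nsb)            # bound fraction
    z = np.log(y / (1.0 - y))                  # logit
    A = np.column_stack([np.ones_like(dose), np.log(dose)])
    resid = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
    return float(np.sum(resid ** 2))

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
b0_true, nsb_true = 10000.0, 500.0
y_true = 1.0 / (1.0 + np.exp(-(1.0 - 1.2 * np.log(dose))))
counts = nsb_true + (b0_true - nsb_true) * y_true    # noise-free synthetic counts

# Adjust B0 over a small grid, keeping NSB fixed for brevity.
grid = np.linspace(9000.0, 11000.0, 201)
b0_hat = grid[int(np.argmin([logit_log_sse(counts, dose, g, nsb_true)
                             for g in grid]))]
```

    A misspecified B0 bends the logit-log plot away from a straight line, so minimizing the lack of fit recovers the true end-point.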

  1. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    PubMed Central

    Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

    Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio, termed Transmission Ratio Distortion (TRD), occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power, and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset and showed results consistent with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies. PMID:27630667

  2. Screening parameters for the relativistic hydrogenic model

    NASA Astrophysics Data System (ADS)

    Lanzini, Fernando; Di Rocco, Héctor O.

    2015-12-01

    We present a Relativistic Screened Hydrogenic Model (RSHM) where the screening parameters depend on the variables (n , l , j) and the parameters (Z , N) . These screening parameters were derived theoretically in a neat form with no use of experimental values nor numerical values from self-consistent codes. The results of the model compare favorably with those obtained by using more sophisticated approaches. For the interested reader, a copy of our code can be requested from the corresponding author.

  3. Study of dual wavelength composite output of solid state laser based on adjustment of resonator parameters

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Nie, Jinsong; Wang, Xi; Hu, Yuze

    2016-10-01

    The 1064nm fundamental wave (FW) and the 532nm second harmonic wave (SHW) of the Nd:YAG laser have been widely applied in many fields. In some military applications requiring interference in both the visible and near-infrared spectral ranges, de-identification interference technology based on the dual-wavelength composite output of FW and SHW offers an effective way of making the device or equipment miniaturized and low cost. In this paper, the application of 1064nm and 532nm dual-wavelength composite output technology in military electro-optical countermeasures is studied. A resonator configuration that can achieve composite laser output with high power, high beam quality and high repetition rate is proposed. Considering the thermal lens effect, the stability of this resonator is analyzed based on the theory of the cavity transfer matrix. It shows that with increasing thermal effect, the intracavity fundamental mode volume decreases, resulting in peak fluctuations of the cavity stability parameter. To explore the impact of the resonator parameters on the characteristics and output ratio of the composite laser, dual-wavelength composite output models of the solid-state laser in both continuous and pulsed operation are established from the steady-state equation and the rate equations. Through theoretical simulation and analysis, the optimal KTP length and the best FW transmissivity are obtained. An experiment is then carried out to verify the correctness of the theoretical calculation.

  4. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…

  6. Variable deceleration parameter and dark energy models

    NASA Astrophysics Data System (ADS)

    Bishi, Binaya K.

    2016-03-01

    This paper deals with the Bianchi type-III dark energy model and the equation of state parameter in the first class of f(R,T) gravity. Here, R and T represent the Ricci scalar and the trace of the energy-momentum tensor, respectively. The exact solutions of the modified field equations are obtained by using (i) a linear relation between the expansion scalar and the shear scalar, (ii) a linear relation between the state parameter and the skewness parameter and (iii) a variable deceleration parameter. To obtain physically plausible cosmological models, the variable deceleration parameter with a suitable substitution leads to a scale factor of the form a(t) = [sinh(αt)]^(1/n), where α and n > 0 are arbitrary constants. It is observed that our models are accelerating for 0 < n < 1, while for n > 1 they exhibit a transition phase from deceleration to acceleration. Further, we have discussed the physical properties of the models.

  7. Adjustable box-wing model for solar radiation pressure impacting GPS satellites

    NASA Astrophysics Data System (ADS)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.

    2012-04-01

    One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate for the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite surfaces. In addition, the so-called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data, the estimated parameters and orbits are analyzed. The performance of the models is comparable when looking at orbit overlap and orbit

  8. Application and Performance Analysis of a New Bundle Adjustment Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Liu, X.; Chen, R.; Wan, J.; Wang, Q.; Wang, H.; Li, Y.; Yan, L.

    2017-09-01

    As the basis of photogrammetry, Bundle Adjustment (BA) can recover camera poses accurately, reconstruct 3D models of the environment, and serve as the criterion for digital production. The classical nonlinear optimization of the BA model based on Euclidean coordinates suffers from strong dependence on the initial values, making it unable to converge fast or to converge to a global minimum. This paper first introduces a new BA model based on parallax angle feature parametrization, and then analyses the applications and performance of the model in the photogrammetry field. To assess the impact and performance of the model (especially in aerial photogrammetry), experiments using two aerial datasets under different initial values were conducted. The experimental results are better than those of some well-known BA software packages, and the simulation results illustrate the greater stability of the new model compared with normal BA in Euclidean coordinates. In all, the new BA model shows promise for faster and more efficient aerial photogrammetry, with good convergence and fast convergence speed.

  9. Risk-adjusted models for adverse obstetric outcomes and variation in risk-adjusted outcomes across hospitals.

    PubMed

    Bailit, Jennifer L; Grobman, William A; Rice, Madeline Murguia; Spong, Catherine Y; Wapner, Ronald J; Varner, Michael W; Thorp, John M; Leveno, Kenneth J; Caritis, Steve N; Shubert, Phillip J; Tita, Alan T; Saade, George; Sorokin, Yoram; Rouse, Dwight J; Blackwell, Sean C; Tolosa, Jorge E; Van Dorsten, J Peter

    2013-11-01

    Regulatory bodies and insurers evaluate hospital quality using obstetric outcomes; however, meaningful comparisons should take preexisting patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetric quality. Our objective was to establish risk-adjusted models for 5 obstetric outcomes and assess hospital performance across these outcomes. We studied a cohort of 115,502 women and their neonates born in 25 hospitals in the United States from March 2008 through February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Venous thromboembolism occurred too infrequently (0.03%; 95% confidence interval [CI], 0.02-0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage, 2.29%; 95% CI, 2.20-2.38; peripartum infection, 5.06%; 95% CI, 4.93-5.19; severe perineal laceration at spontaneous vaginal delivery, 2.16%; 95% CI, 2.06-2.27; neonatal composite, 2.73%; 95% CI, 2.63-2.84). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital-adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. Copyright © 2013 Mosby, Inc. All rights reserved.

  10. On parameter estimation in population models.

    PubMed

    Ross, J V; Taimre, T; Pollett, P K

    2006-12-01

    We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population.
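
    For a fully observed continuous-time Markov path, the maximum-likelihood estimator of each rate constant has a simple closed form: the event count divided by the time-integrated rate factor. The sketch below applies this to the stochastic SIS logistic model via a Gillespie simulation; it is an illustrative special case, not the authors' more general density-dependent method, and all parameter values are made up:

```python
import random

def simulate_sis(N, beta, mu, n0, T, rng):
    """Gillespie simulation of the stochastic SIS logistic model; returns the
    event counts and time-integrated rate factors needed for the MLE."""
    n, t = n0, 0.0
    infections = recoveries = 0
    int_inf = int_rec = 0.0          # integrals of n(N-n)/N and n over time
    while n > 0:
        f_inf = n * (N - n) / N      # infection rate factor
        f_rec = n                    # recovery rate factor
        total = beta * f_inf + mu * f_rec
        dt = rng.expovariate(total)
        if t + dt >= T:
            int_inf += f_inf * (T - t)
            int_rec += f_rec * (T - t)
            break
        int_inf += f_inf * dt
        int_rec += f_rec * dt
        t += dt
        if rng.random() < beta * f_inf / total:
            n += 1
            infections += 1
        else:
            n -= 1
            recoveries += 1
    return infections, recoveries, int_inf, int_rec

def mle_sis(infections, recoveries, int_inf, int_rec):
    # closed-form MLE for a fully observed continuous-time Markov path
    return infections / int_inf, recoveries / int_rec
```

    With a long enough observation window the estimates concentrate around the true infection and recovery rates.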

  11. Exploiting intrinsic fluctuations to identify model parameters.

    PubMed

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.

  12. Constructing stochastic models from deterministic process equations by propensity adjustment

    PubMed Central

    2011-01-01

    Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy to stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of CME are almost always complicated, and successes have been limited to relative simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic model for a biochemical

  13. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
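
    The random-effects idea above can be illustrated with a method-of-moments fit in the DerSimonian-Laird style: the annual estimates are treated as draws around a common mean with a between-year variance (the hyperparameter), and each raw estimate is shrunk toward that mean. This is a generic sketch, not the author's quasi-likelihood method; the numbers in the usage example are invented:

```python
def random_effects_shrinkage(estimates, variances):
    """Method-of-moments random-effects fit (DerSimonian-Laird style): treat
    the annual estimates as draws around a common mean with between-year
    variance tau2, then shrink each raw estimate toward that mean."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu = sum(wi * e for wi, e in zip(w, estimates)) / sw
    q = sum(wi * (e - mu) ** 2 for wi, e in zip(w, estimates))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)   # between-parameter variance
    # empirical-Bayes posterior means: convex combination of raw value and mean
    shrunk = [(tau2 * e + v * mu) / (tau2 + v)
              for e, v in zip(estimates, variances)]
    return mu, tau2, shrunk
```

    Ignoring tau2 (i.e., treating the parameters as sharing one ultrastructural pattern exactly) is what overstates the evidence for pattern, as the abstract notes.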

  14. Recursive parameter estimation of hydrologic models

    NASA Astrophysics Data System (ADS)

    Rajaram, Harihar; Georgakakos, Konstantine P.

    1989-02-01

    Proposed is a nonlinear filtering approach to recursive parameter estimation of conceptual watershed response models in state-space form. The conceptual model state is augmented by the vector of free parameters which are to be estimated from input-output data, and the extended Kalman filter is used to recursively estimate and predict the augmented state. The augmented model noise covariance is parameterized as the sum of two components: one due to errors in the augmented model input and another due to errors in the specification of augmented model constants that were estimated from other than input-output data (e.g., topographic and rating curve constants). These components depend on the sensitivity of the augmented model to input and uncertain constants. Such a novel parameterization allows for nonstationary model noise statistics that are consistent with the dynamics of watershed response as they are described by the conceptual watershed response model. Prior information regarding uncertainty in input and uncertain constants in the form of degree-of-belief estimates of hydrologists can be used directly within the proposed formulation. Even though model structure errors are not explicitly parameterized in the present formulation, such errors can be identified through the examination of the one-step ahead predicted normalized residuals and the parameter traces during convergence. The formulation is exemplified by the estimation of the parameters of a conceptual hydrologic model with data from the 2.1-km2 watershed of Woods Lake located in the Adirondack Mountains of New York.
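
    The state-augmentation idea can be sketched with a toy linear reservoir S' = S + dt·(u − kS) with observed discharge z = kS, where the unknown recession constant k is appended to the state and estimated by an extended Kalman filter. This is a generic EKF sketch under simplified assumptions, not the paper's conceptual watershed model or its noise parameterization; all numbers are illustrative:

```python
def mat2(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ekf_step(x, P, u, z, dt, q, r):
    """One extended-Kalman-filter step for the augmented state x = [S, k]
    of a toy linear reservoir S' = S + dt*(u - k*S), observed via z = k*S."""
    S, k = x
    # predict (k evolves as a random-walk parameter: k' = k)
    Sp = S + dt * (u - k * S)
    F = [[1.0 - dt * k, -dt * S], [0.0, 1.0]]
    Ft = [[F[0][0], F[1][0]], [F[0][1], F[1][1]]]
    Pp = mat2(mat2(F, P), Ft)
    Pp[0][0] += q
    Pp[1][1] += q
    # update with the scalar observation h(x) = k * S, Jacobian H = [k, Sp]
    H = [k, Sp]
    PH = [Pp[0][0] * H[0] + Pp[0][1] * H[1],
          Pp[1][0] * H[0] + Pp[1][1] * H[1]]
    s = H[0] * PH[0] + H[1] * PH[1] + r          # innovation variance
    K = [PH[0] / s, PH[1] / s]                   # Kalman gain
    y = z - k * Sp                               # innovation
    xn = [Sp + K[0] * y, k + K[1] * y]
    IKH = [[1.0 - K[0] * H[0], -K[0] * H[1]],
           [-K[1] * H[0], 1.0 - K[1] * H[1]]]
    return xn, mat2(IKH, Pp)

# synthetic truth and a filtering run (all numbers illustrative)
k_true, dt = 0.3, 0.1
S_true = 10.0
x, P = [10.0, 0.1], [[1.0, 0.0], [0.0, 1.0]]     # wrong initial k = 0.1
for step in range(200):
    u = 2.0 if (step // 10) % 2 == 0 else 0.0    # alternating input for excitation
    S_true = S_true + dt * (u - k_true * S_true)
    z = k_true * S_true                          # noise-free discharge observation
    x, P = ekf_step(x, P, u, z, dt, 1e-8, 1e-4)
```

    The parameter trace of x[1] converging toward k_true is the kind of diagnostic the abstract mentions for spotting model-structure errors.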

  15. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion battery modelling that takes aging effects into account. It first analyses the impact of aging on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Under some hypotheses and with an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack consisting of four cells.
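
    The single-parameter adjustment can be illustrated as follows: if the aged equilibrium potential is modelled as p(alpha·soc) for a monotonic polynomial p, then the charge passed between two measured OCV set-points fixes alpha directly. The polynomial coefficients and numbers below are hypothetical, not the paper's fitted model:

```python
def p_ocv(u):
    # hypothetical fresh-cell equilibrium-potential polynomial (volts vs. stoichiometry)
    return 3.0 + 0.8 * u + 0.2 * u * u

def p_inv(v, lo=0.0, hi=1.0, tol=1e-10):
    # invert the monotonically increasing polynomial by bisection
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_ocv(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def aging_parameter(v1, v2, delta_soc):
    """Single aging parameter alpha in OCV_aged(soc) = p_ocv(alpha * soc),
    fixed by the state-of-charge swing between two measured OCV set-points."""
    return (p_inv(v2) - p_inv(v1)) / delta_soc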

  16. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
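
    The weighting idea in IUWLS can be sketched for a toy one-parameter model y = theta·x in which the regressor x (the analogue of the pumping input) is itself uncertain: each observation is weighted by the inverse of its total variance, observation noise plus input noise propagated through the current parameter estimate, and the weights are re-evaluated as theta is updated. A hypothetical illustration, not the authors' implementation:

```python
def iuwls(x_obs, y_obs, sx, sy, iters=20):
    """Input-uncertainty weighted least squares for y = theta * x.
    sx[i] is the input uncertainty of observation i; the weights depend on
    theta, so the fit is iterated to convergence (generalized LS idea)."""
    theta = 1.0   # initial guess
    for _ in range(iters):
        w = [1.0 / (sy ** 2 + (theta * s) ** 2) for s in sx]
        num = sum(wi * xi * yi for wi, xi, yi in zip(w, x_obs, y_obs))
        den = sum(wi * xi * xi for wi, xi in zip(w, x_obs))
        theta = num / den   # closed-form weighted LS update for one parameter
    return theta
```

    Observations with large input uncertainty are automatically down-weighted, which is what suppresses the bias that plain OLS incurs.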

  17. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  18. Adjustment of design parameters for improved feedback properties in the linear quadratic regulator

    NASA Astrophysics Data System (ADS)

    McEwen, R. S.

    1982-03-01

    Classical analysis and design methods for single input-single output (SISO) systems, such as gain and phase margins, do not generalize easily to MIMO systems. Recently, the singular values of the return difference and inverse Nyquist matrices have proven useful in analyzing multiple input-multiple output (MIMO) systems. The linear quadratic formulation is useful for the design of MIMO controllers. A disadvantage of this design method is that all the design specifications must be incorporated into a quadratic cost functional. This thesis contains a systematic method for adjusting the quadratic cost to manipulate the singular value functionals and the feedback properties and thus achieve the design requirements.

  19. Mathematical modeling on experimental protocol of glucose adjustment for non-invasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Min, Xiaolin; Zou, Da; Xu, Kexin

    2012-03-01

    Currently, blood glucose concentration levels from OGTT (Oral Glucose Tolerance Test) results are used to build PLS models for noninvasive blood glucose sensing by Near-Infrared (NIR) Spectroscopy. However, the single dynamic trend of blood glucose concentration obtained from OGTT results is not varied enough to provide comprehensive data that would make the PLS model robust and accurate. In this talk, with the ultimate purpose of improving the stability and accuracy of the PLS model, we introduce an integrated minimal model (IMM) of the glucose metabolism system. First, by adjusting parameters that represent different metabolic characteristics and individual differences, comparatively ideal adjustment programs for different groups of people, and even individuals, were customized. Second, with different glucose input types (oral administration, intravenous injection, or intravenous drip), we obtained various changes in blood glucose concentration. By studying these methods of blood glucose adjustment, we can thus customize corresponding experimental protocols of glucose adjustment for different people for noninvasive blood glucose sensing and supply comprehensive data for the PLS model.

  20. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
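
    The local Fisher-information analysis that the authors generalize can be sketched directly: build the information matrix from finite-difference output sensitivities and inspect its spectrum; a near-zero eigenvalue flags a structurally unidentifiable parameter combination. In the toy model below only the sum k1 + k2 enters the output, so one eigenvalue vanishes. Note this is exactly the local check the paper argues is insufficient in general; names are illustrative:

```python
import math

def model(params, ts):
    # toy model: only k1 + k2 is identifiable from the output
    k1, k2 = params
    return [math.exp(-(k1 + k2) * t) for t in ts]

def fisher_information(params, ts, eps=1e-6):
    """Finite-difference Fisher information matrix (unit measurement noise).
    Near-zero eigenvalues indicate parameter combinations the data cannot
    constrain."""
    sens = []
    for i in range(len(params)):
        up = list(params); up[i] += eps
        dn = list(params); dn[i] -= eps
        yu, yd = model(up, ts), model(dn, ts)
        sens.append([(a - b) / (2.0 * eps) for a, b in zip(yu, yd)])
    n = len(params)
    return [[sum(sens[i][t] * sens[j][t] for t in range(len(ts)))
             for j in range(n)] for i in range(n)]

def eig2(M):
    # eigenvalues of a symmetric 2x2 matrix, smallest first
    a, b, c = M[0][0], M[0][1], M[1][1]
    m = 0.5 * (a + c)
    d = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return m - d, m + d
```

    The vanishing eigenvalue's direction (here k1 − k2) is the functional relation along which the parameters can drift without changing the fit.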

  1. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived at the microscopic level in a straightforward way.

  2. Adjusting power for a baseline covariate in linear models

    PubMed Central

    Glueck, Deborah H.; Muller, Keith E.

    2009-01-01

    SUMMARY The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543

  3. Models and parameters for environmental radiological assessments

    SciTech Connect

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  4. Dose Adjustment Strategy of Cyclosporine A in Renal Transplant Patients: Evaluation of Anthropometric Parameters for Dose Adjustment and C0 vs. C2 Monitoring in Japan, 2001-2010

    PubMed Central

    Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki

    2013-01-01

    The optimal use and monitoring of cyclosporine A (CyA) have remained unclear, and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; the secondary aim was to compare the usefulness of predose (C0) and 2-hour postdose (C2) concentration monitoring. An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficients was C0 > C2 (C0: r=0.6273; C2: r=0.5562). The linear regression analyses between AUC0-9 of CyA and the candidate parameters indicated their potential usefulness in the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > Age. In conclusion, C2 after oral administration showed large variation and could carry a high risk of overdosing; it was therefore not considered useful as a single monitoring approach, but should rather be used together with C0 monitoring.
The regression analyses between AUC0-9 of CyA and anthropometric parameters indicated that IBW was potentially a superior predictor to TBW for dose adjustment of CyA in an empiric strategy (IBW: r=0.5181; TBW: r=0.3192); however, this finding seems to lack a pharmacokinetic rationale and thus warrants further basic and clinical
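
The single-point monitoring comparison rests on ordinary linear regression between AUC0-9 and each candidate predictor. A minimal sketch with fabricated illustrative numbers (the study's actual measurements are not reproduced here):

```python
import numpy as np

# Hypothetical CyA monitoring data (C0 in ng/mL, AUC in ng*h/mL), for
# illustration only.
c0  = np.array([120.0, 95.0, 150.0, 80.0, 200.0, 110.0, 170.0, 130.0])
auc = np.array([3100.0, 2600.0, 3900.0, 2300.0, 5000.0, 2900.0, 4300.0, 3400.0])

def pearson_r(x, y):
    """Correlation coefficient used to rank single-point predictors of AUC."""
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Slope and intercept of the linear regression AUC = a*C0 + b
a, b = np.polyfit(c0, auc, 1)
r = pearson_r(c0, auc)
```

Repeating the same computation for C2 (or for an anthropometric predictor such as IBW) and comparing the resulting r values is what produces the rank orders quoted in the abstract.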

  5. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
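
The reported evaluation metric is the Dice coefficient between the automated mask and an expert-drawn ground-truth mask. A minimal sketch (the toy masks are illustrative, not BSE output):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy one-dimensional "volumes"; real use compares 3-D automated and
# expert-labeled brain masks voxel by voxel.
truth = np.array([0, 1, 1, 1, 0, 0])
auto  = np.array([0, 1, 1, 0, 0, 0])
score = dice_coefficient(auto, truth)  # 2*2 / (2+3) = 0.8
```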

  6. Some two-layer models of the shelf-slope front: Geostrophic adjustment and its maintenance

    SciTech Connect

    Ou, H.W.

    1983-10-01

    Two conceptual models of a two-layered frontal system are presented to study the wintertime shelf-slope front. The first model examines the geostrophic adjustment over a step topography after the fall overturning and applies only over short time scales, before the nonconservative processes become important. The second model, on the other hand, examines the thermodynamic balance over longer time scales, when some dissipative and mixing effects are included. From the geostrophic-adjustment model, it is found that the flat-bottom solution of a less-dense shelf water with respect to the slope water is little modified by the presence of a step. But in the case of denser shelf water, the solution shows the detachment of a spillage when the depth ratio across the step is greater than two, resembling some regional observations. In the frictional model, the wind-generated entrainment is demonstrated to provide a virtual momentum source that maintains the along-front current shear against friction and can thus account for the persistence of the front through the winter season. The entrainment also decreases the buoyancy of the exported shelf water, the distribution of which, however, varies greatly with the external parameters. For parameter values applicable to the Middle Atlantic Bight, an inflection point, corresponding to a weakened lateral buoyancy gradient, is predicted above the front, consistent with observation.

  7. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    PubMed

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  8. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard

    PubMed Central

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase “RHINE WAAL UNIVERSITY” with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy). PMID:26733788

  9. [Parameter adaptation in population models].

    PubMed

    Il'ichev, V G

    2005-01-01

    Ecology-evolutionary models of low dimension were developed on the basis of competitive selection criteria. Dynamics of the variables (numbers of individuals) and the search for evolutionarily stable values of the parameters (biological characteristics of the populations) were monitored in the suggested models. If the environmental temperature changes periodically, the average (a) and width (d) of the temperature tolerance range appear to be the important parameters. Model experiments established that stable values of temperature (a) favorable for the development of highly specialized algae (low d) were close to the minimum and maximum of the temperature curve, whereas for weakly specialized algae (high d) these values were close to the average temperature of the environment. In a similar manner, a set of evolutionarily stable parameters (a, d) was established for each of the two interacting populations (competitors and "predator-prey"). Hypotheses concerning its geometric structure and the process of coevolution are formulated.

  10. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  12. Estimation of Model Parameters for Steerable Needles

    PubMed Central

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
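
The fitting criterion matches a closed-form model covariance to the covariance estimated from repeated insertions. A sketch of the experimental side and one possible mismatch metric (the pose data and the relative-error definition here are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical repeated-insertion tip poses: rows are trials, columns a
# 6-vector (x, y, z position and three orientation coordinates).
tip_poses = rng.standard_normal((25, 6)) * np.array([1.0, 1.0, 0.5, 0.1, 0.1, 0.2])

# Sample covariance of the tip-pose distribution (columns are variables)
cov_exp = np.cov(tip_poses, rowvar=False)

def covariance_error(cov_model, cov_exp):
    """Relative Frobenius-norm mismatch used to score candidate noise parameters."""
    return float(np.linalg.norm(cov_model - cov_exp) / np.linalg.norm(cov_exp))

# A model covariance that over-predicts spread by 10% scores an error of 0.1
err = covariance_error(cov_exp * 1.1, cov_exp)
```

Estimating the noise parameters then amounts to searching parameter space for the closed-form covariance that minimizes this kind of mismatch.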

  13. Parameter identification and modeling of longitudinal aerodynamics

    NASA Technical Reports Server (NTRS)

    Aksteter, J. W.; Parks, E. K.; Bach, R. E., Jr.

    1995-01-01

    Using a comprehensive flight test database and a parameter identification software program produced at NASA Ames Research Center, a math model of the longitudinal aerodynamics of the Harrier aircraft was formulated. The identification program employed the equation error method using multiple linear regression to estimate the nonlinear parameters. The formulated math model structure adhered closely to aerodynamic and stability/control theory, particularly with regard to compressibility and dynamic manoeuvring. Validation was accomplished by using a three degree-of-freedom nonlinear flight simulator with pilot inputs from flight test data. The simulation models agreed quite well with the measured states. It is important to note that the flight test data used for the validation of the model was not used in the model identification.
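
The equation-error method amounts to ordinary least squares regression of a measured aerodynamic coefficient on states and controls. A hedged sketch with synthetic data (the model structure and numbers below are illustrative, not the Harrier database):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical longitudinal pitch-moment model:
#   Cm = Cm0 + Cm_alpha*alpha + Cm_q*qhat + Cm_de*delta_e
n = 400
alpha = rng.uniform(-0.1, 0.2, n)     # angle of attack [rad]
qhat  = rng.uniform(-0.05, 0.05, n)   # normalized pitch rate
de    = rng.uniform(-0.2, 0.2, n)     # elevator deflection [rad]
true_theta = np.array([0.02, -0.6, -8.0, -1.1])   # illustrative derivatives

X = np.column_stack([np.ones(n), alpha, qhat, de])
cm = X @ true_theta + 0.002 * rng.standard_normal(n)  # "measured" Cm with noise

# Equation-error estimation: multiple linear regression on the measured data
theta_hat, *_ = np.linalg.lstsq(X, cm, rcond=None)
```

Nonlinear terms (e.g., Mach-dependent or alpha-squared regressors) fit the same framework as long as the model stays linear in the unknown parameters.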

  14. Analysis of Modeling Parameters on Threaded Screws.

    SciTech Connect

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper will explore different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.

  15. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
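
The mass-spring-damper idea can be sketched with a single hypothetical slosh mode, recovering the damper coefficient from a simulated free response via the logarithmic decrement (all parameter values are illustrative, not spacecraft data):

```python
import numpy as np

# One mass-spring-damper "slosh mode" with hypothetical parameters
m, k, c = 2.0, 50.0, 0.8           # mass [kg], spring [N/m], damper [N*s/m]
wn = np.sqrt(k / m)                # undamped natural frequency [rad/s]
zeta = c / (2.0 * np.sqrt(k * m))  # damping ratio
wd = wn * np.sqrt(1.0 - zeta**2)   # damped natural frequency

t = np.linspace(0.0, 10.0, 5001)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)  # free response with x(0) = 1

# Identify successive response peaks and apply the logarithmic decrement
peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
delta = np.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
c_est = 2.0 * zeta_est * np.sqrt(k * m)      # recovered damper coefficient
```

The same idea scales up: each additional slosh mode contributes another mass-spring-damper triple, and the parameters are tuned until the simulated decay envelope matches the measured one.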

  16. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
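
The Green's Functions step reduces to a small linear least-squares problem: find the combination of sensitivity experiments that minimizes the model-data misfit. A sketch with synthetic numbers (dimensions and values are illustrative, not ECCO2-Darwin output):

```python
import numpy as np

rng = np.random.default_rng(3)

# Each column of G holds the change in the modeled data-equivalents (e.g.,
# pCO2 at observation points) produced by one sensitivity ("Green's function")
# experiment; d is the baseline model-data misfit. Six control parameters,
# as in the abstract; the 200 observation points are hypothetical.
n_obs, n_ctrl = 200, 6
G = rng.standard_normal((n_obs, n_ctrl))
w_true = np.array([0.5, -1.0, 0.2, 0.0, 0.8, -0.3])
d = G @ w_true + 0.05 * rng.standard_normal(n_obs)

# Optimal control adjustment: min_w || G w - d ||
w, *_ = np.linalg.lstsq(G, d, rcond=None)

cost_baseline = float(d @ d)
resid = d - G @ w
cost_adjusted = float(resid @ resid)
reduction = 1.0 - cost_adjusted / cost_baseline  # fractional cost reduction
```

The fitted weights w are then applied to the initial conditions and gas exchange coefficients before re-integrating the forward model.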

  17. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    ERIC Educational Resources Information Center

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  20. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.
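
ForceBalance-style calibration minimizes a weighted sum of squared, scale-normalized deviations of simulated properties from their targets. A sketch of such an objective (the property names, weights, and scales are assumptions for illustration, not the actual AMOEBA14 targets):

```python
import numpy as np

# Reference values, normalization scales, and weights for a few liquid-phase
# properties (hypothetical numbers chosen only to illustrate the structure).
reference = {"density_298K": 0.997, "hvap_298K": 10.52, "dielectric_298K": 78.4}
scales    = {"density_298K": 0.001, "hvap_298K": 0.10,  "dielectric_298K": 2.0}
weights   = {"density_298K": 1.0,   "hvap_298K": 1.0,   "dielectric_298K": 0.5}

def objective(simulated):
    """Penalty minimized when tuning force-field parameters: each property
    contributes weight * ((sim - ref) / scale)**2."""
    return sum(
        weights[p] * ((simulated[p] - reference[p]) / scales[p]) ** 2
        for p in reference
    )

perfect = objective(reference)  # zero by construction
off = objective({"density_298K": 0.999, "hvap_298K": 10.42, "dielectric_298K": 80.4})
```

An optimizer varies the force-field parameters, re-simulates the properties, and descends on this objective; the ab initio cluster energies enter as additional weighted terms.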

  1. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  2. Nonlinear, lumped parameter transformer model reduction technique

    SciTech Connect

    Degeneff, R.C.; Gutierrez, M.R.; Vakilian, M.

    1995-04-01

    Utility engineers often need nonlinear transformer models in order to investigate power system transient events. Methods exist to create accurate wideband reduced-order linear transformer models; however, to date a method of creating a reduced-order wideband nonlinear transformer model has not been presented. This paper describes a technique that starts with a detailed nonlinear transformer model used for insulation design studies and reduces its order so that it can be used conveniently in EMTP. The method is based on linearization of the core's saturable characteristic during each solution time interval. The technique uses Kron's reduction approach in each solution time interval. It can be applied to any nonlinear lumped parameter network which uses electric parameter analogies (i.e., FEM networks). This paper outlines the nonlinear reduction technique. An illustrative example is given using the transient voltage response during saturation for a 785/345/34.5 kV, YYD 500 MVA single-phase autotransformer.
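
Kron reduction itself is a Schur-complement operation on the node-admittance matrix: internal nodes are eliminated while the terminal behavior is preserved. A minimal sketch on a toy network (the 3-node values are illustrative):

```python
import numpy as np

def kron_reduce(Y, keep, elim):
    """Kron reduction: eliminate internal nodes from an admittance matrix,
    Y_red = Y_kk - Y_ke @ inv(Y_ee) @ Y_ek, preserving terminal behavior."""
    Y = np.asarray(Y, dtype=float)
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Toy 3-node ladder network (hypothetical conductances); eliminate node 1.
Y = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -2.0],
              [ 0.0, -2.0,  4.0]])
Y_red = kron_reduce(Y, keep=[0, 2], elim=[1])
```

In the paper's setting the linearized (per-time-interval) network matrix plays the role of Y, so the reduction is re-applied whenever the core's operating point changes.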

  3. Dolphins Adjust Species-Specific Frequency Parameters to Compensate for Increasing Background Noise

    PubMed Central

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles’ frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise. PMID:25853825

  4. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    PubMed

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  5. An automated approach for tone mapping operator parameter adjustment in security applications

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick

    2014-05-01

    High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Different from traditional low dynamic range (LDR) content, HDR content tends to be visually more appealing and realistic, as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and the visual quality tends to improve. HDR can also be directly exploited for new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying and tracking visual information which otherwise might be difficult with typical LDR content due to factors such as lack or excess of illumination, extreme contrast in the scene, etc. On the other hand, with HDR there might be issues related to increased privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMOs) are used. In this paper, we present a universal method for TMO parameter tuning, in order to maintain as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could be of advantage for security surveillance but, on the other hand, raises the question of a possible increase in privacy intrusion.
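
Automatic TMO tuning can be sketched as a search over an operator's parameter for the output that retains the most detail. Histogram entropy is used below as a stand-in detail criterion and a Reinhard-style global curve as a stand-in operator; both are assumptions for illustration, the paper's actual TMOs and criterion are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

def tonemap(lum, a):
    """Simple global tone curve L / (L + a), mapping HDR luminance into (0, 1)."""
    return lum / (lum + a)

def entropy(img, bins=64):
    """Shannon entropy of the LDR histogram, a proxy for retained detail."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical HDR luminance: log-normal, spanning several orders of magnitude
lum = np.exp(rng.normal(0.0, 2.0, 100_000))

# Automatic tuning: pick the parameter whose output keeps the most detail
candidates = np.geomspace(0.01, 100.0, 25)
best_a = max(candidates, key=lambda a: entropy(tonemap(lum, a)))
e_best = entropy(tonemap(lum, best_a))
```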

  6. EMG/ECG Acquisition System with Online Adjustable Parameters Using ZigBee Wireless Technology

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroyuki

    This paper deals with a novel wireless bio-signal acquisition system employing ZigBee wireless technology, which consists of two main components: an intelligent electrode and a data-acquisition host. The former is the main topic of this paper. It is placed on a subject's body to amplify a bio-signal such as EMG or ECG and stream its data at up to 2 ksps. One of the most remarkable features of the intelligent electrode is that it can change its own parameters, both digital and analog, on-line. The author first describes its design, then introduces a small, light, and low-cost implementation of the intelligent electrode named “VAMPIRE-BAT,” and shows some experimental results to confirm its usability and to estimate its practical performance.

  7. Parameter Estimation for Viscoplastic Material Modeling

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.

    1997-01-01

    A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.

  8. Effects of model deficiencies on parameter estimation

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.

    1988-01-01

    Reliable structural dynamic models will be required as a basis for deriving the reduced-order plant models used in control systems for large space structures. Ground vibration testing and model verification will play an important role in the development of these models; however, fundamental differences between the space environment and earth environment, as well as variations in structural properties due to as-built conditions, will make on-orbit identification essential. The efficiency, and perhaps even the success, of on-orbit identification will depend on having a valid model of the structure. It is envisioned that the identification process will primarily involve parametric methods. Given a correct model, a variety of estimation algorithms may be used to estimate parameter values. This paper explores the effects of modeling errors and model deficiencies on parameter estimation by reviewing previous case histories. The effects depend at least to some extent on the estimation algorithm being used. Bayesian estimation was used in the case histories presented here. It is therefore conceivable that the behavior of an estimation algorithm might be useful in detecting and possibly even diagnosing deficiencies. In practice, the task is complicated by the presence of systematic errors in experimental procedures and data processing and in the use of the estimation procedures themselves.

  9. Multiple confidence intervals for selected parameters adjusted for the false coverage rate in monotone dose-response microarray experiments.

    PubMed

    Peng, Jianan; Liu, Wei; Bretz, Frank; Shkedy, Ziv

    2017-07-01

    Benjamini and Yekutieli () introduced the concept of the false coverage-statement rate (FCR) to account for selection when confidence intervals (CIs) are constructed only for the selected parameters. Dose-response analysis in dose-response microarray experiments is conducted only for genes having a monotone dose-response relationship, which is a selection problem. In this paper, we consider multiple CIs for the mean gene expression difference between the highest dose and the control in monotone dose-response microarray experiments, constructed for selected parameters and adjusted for the FCR. A simulation study is conducted to assess the performance of the proposed method. The method is applied to a real dose-response microarray experiment with 16,998 genes for illustration. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
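    The FCR adjustment itself is simple to state: if R of the m candidate parameters are selected, each selected CI is constructed at marginal level 1 - Rq/m instead of 1 - q. A minimal sketch in Python, assuming normal-theory intervals (the function and toy data below are illustrative, not from the paper):

```python
from statistics import NormalDist

def fcr_adjusted_cis(estimates, std_errors, selected, q=0.05):
    """
    Benjamini-Yekutieli FCR-adjusted marginal CIs: with R of m parameters
    selected, each selected interval is built at level 1 - R*q/m.
    """
    m, R = len(estimates), len(selected)
    if R == 0:
        return {}
    alpha = R * q / m                          # adjusted per-interval error rate
    z = NormalDist().inv_cdf(1 - alpha / 2)    # normal-theory critical value
    return {i: (estimates[i] - z * std_errors[i],
                estimates[i] + z * std_errors[i]) for i in selected}

# Toy example: 10 gene-expression differences, 2 selected for follow-up
est = [0.1, 2.5, -0.2, 3.1, 0.0, 0.3, -0.1, 0.2, 0.1, -0.3]
se = [0.5] * 10
cis = fcr_adjusted_cis(est, se, selected=[1, 3], q=0.05)
```

    With R = 2 and m = 10 the selected intervals are built at the 99% level, i.e. wider than unadjusted 95% intervals, which is what protects the coverage statement over the selected set.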

  10. A calibration adjustment technique combining ERB parameters from different remote sensing platforms into a long-term data set

    NASA Technical Reports Server (NTRS)

    Ardanuy, P. E.; Jacobowitz, H.

    1984-01-01

    Earth-radiation-budget (ERB) experiments on board the Nimbus-6 (ERB 6) and Nimbus-7 (ERB 7) spacecraft have measured wide-field-of-view total (0.2 to 50 micron), shortwave (0.2 to 3.8 micron), and near-infrared (0.7 to 2.8 micron) terrestrial irradiances for a joint lifetime of over 8 years. Though the spectral characteristics of both experiments are nearly identical, instrument degradation and altitude differences introduce discrepancies between the two data sets. ERB parameters from these two observing platforms may be combined into a scientifically meaningful data set only after these discrepancies are eliminated. To facilitate the creation of a long-term ERB data set, comparisons of the ERB 6 experiment irradiances with, and calibration adjustments with respect to, the corresponding ERB 7 irradiances have been performed. Two calibration methods were developed and applied to the irradiance data from 28 pairs of collocated orbits. The differential effects of altitude, illumination, albedo, and scene inhomogeneities were applied. The result is a set of calibration adjustments that adjust the ERB 6 data to match the ERB 7 data.

  11. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso lacks the oracle property: an estimator with this property asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
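    The weighting idea can be illustrated with a small sketch. An adaptive-lasso problem with weights w_j can be solved by rescaling each column by 1/w_j, running a plain lasso, and rescaling the coefficients back; the AALasso-style weight is SE(b_j)/|b_j|, so a poorly determined coefficient is penalized harder. Everything below (the coordinate-descent solver, the toy data, the ML estimates) is an illustration, not the authors' implementation:

```python
def soft_threshold(z, t):
    """Lasso soft-thresholding operator."""
    return z - t if z > t else z + t if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso for small problems (illustrative only)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

def adaptive_lasso(X, y, lam, weights):
    """Adaptive lasso via column rescaling: lasso on X_j / w_j, rescale back."""
    p = len(weights)
    Xs = [[row[j] / weights[j] for j in range(p)] for row in X]
    g = lasso_cd(Xs, y, lam)
    return [g[j] / weights[j] for j in range(p)]

# AALasso-style initial weights (illustrative ML estimates): w_j = SE(b_j)/|b_j|
b_ml, se_ml = [2.0, 0.05], [0.1, 0.2]
w = [se_ml[j] / abs(b_ml[j]) for j in range(2)]          # [0.05, 4.0]

X = [[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0],
     [-1.0, -1.0], [1.0, 1.0], [-1.0, -1.0]]
y = [2.0, -2.0, 2.0, -2.0, 2.0, -2.0]                    # y = 2 * x1 exactly
beta = adaptive_lasso(X, y, lam=0.01, weights=w)
```

    The noise-like second covariate, carrying a large relative standard error, is driven exactly to zero while the well-determined coefficient is barely shrunk.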

  12. Comparison of Inorganic Carbon System Parameters Measured in the Atlantic Ocean from 1990 to 1998 and Recommended Adjustments

    SciTech Connect

    Wanninkhof, R.

    2003-05-21

    As part of the global synthesis effort sponsored by the Global Carbon Cycle project of the National Oceanic and Atmospheric Administration (NOAA) and U.S. Department of Energy, a comprehensive comparison was performed of inorganic carbon parameters measured on oceanographic surveys carried out under auspices of the Joint Global Ocean Flux Study and related programs. Many of the cruises were performed as part of the World Hydrographic Program of the World Ocean Circulation Experiment and the NOAA Ocean-Atmosphere Carbon Exchange Study. Total dissolved inorganic carbon (DIC), total alkalinity (TAlk), fugacity of CO{sub 2}, and pH data from twenty-three cruises were checked to determine whether there were systematic offsets of these parameters between cruises. The focus was on the DIC and TAlk state variables. Data quality and offsets of DIC and TAlk were determined by using several different techniques. One approach was based on crossover analyses, where the deep-water concentrations of DIC and TAlk were compared for stations on different cruises that were within 100 km of each other. Regional comparisons were also made by using a multiple-parameter linear regression technique in which DIC or TAlk was regressed against hydrographic and nutrient parameters. When offsets of greater than 4 {micro}mol/kg were observed for DIC and/or 6 {micro}mol/kg were observed for TAlk, the data taken on the cruise were closely scrutinized to determine whether the offsets were systematic. Based on these analyses, the DIC data and TAlk data of three cruises were deemed of insufficient quality to be included in the comprehensive basinwide data set. For several of the cruises, small adjustments in TAlk were recommended for consistency with other cruises in the region. 
After these adjustments were incorporated, the inorganic carbon data from all cruises along with hydrographic, chlorofluorocarbon, and nutrient data were combined as a research quality product for the scientific community.

  13. Adjusting exposure limits for long and short exposure periods using a physiological pharmacokinetic model.

    PubMed

    Andersen, M E; MacNaughton, M G; Clewell, H J; Paustenbach, D J

    1987-04-01

    The rationale for adjusting occupational exposure limits for unusual work schedules is to assure, as much as possible, that persons on these schedules are placed at no greater risk of injury or discomfort than persons who work a standard 8 hr/day, 40 hr/week. For most systemic toxicants, the risk index upon which the adjustments are made will be either the peak blood concentration or the integrated tissue dose, depending on the chemical's presumed mechanism of toxicity. Over the past ten years, at least four different models have been proposed for adjusting exposure limits for unusually short and long work schedules. This paper advocates the use of a physiologically based pharmacokinetic (PB-PK) model for determining adjustment factors for unusual exposure schedules, an approach that should be more accurate than those proposed previously. The PB-PK model requires data on the blood:air and tissue:blood partition coefficients, the rate of metabolism of the chemical, organ volumes, organ blood flows and ventilation rates in humans. Laboratory data on two industrially important chemicals--styrene and methylene chloride--were used to illustrate the PB-PK approach. At inhaled concentrations near their respective 8-hr Threshold Limit Value-Time-weighted averages (TLV-TWAs), both of these chemicals are primarily eliminated from the body by metabolism. For these two chemicals, the appropriate risk indexing parameters are integrated tissue dose or total amount of parent chemical metabolized. Since methylene chloride is metabolized to carbon monoxide, the maximum blood carboxyhemoglobin concentration also might be useful as an index of risk for this chemical.(ABSTRACT TRUNCATED AT 250 WORDS)
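    The risk-index logic can be shown without a full PB-PK model. As a much-simplified stand-in, the sketch below uses a linear one-compartment model with hypothetical rate constants and compares the weekly integrated body burden (AUC) of a novel 12 h/day schedule against the standard 8 h/day, 40 h/week schedule; for such a linear model the limit adjustment factor approaches 40/48.

```python
def weekly_auc(c_air, schedule, k_up=1.0, k_elim=0.2, dt=0.01, weeks=4):
    """
    Euler-integrate dC/dt = k_up * C_air(t) - k_elim * C over repeated weeks
    and return the body-burden AUC of the final (approximately steady) week.
    `schedule` lists the exposure duration (hours) for each of the 7 days.
    All rate constants are hypothetical, chosen only for illustration.
    """
    c, auc = 0.0, 0.0
    for week in range(weeks):
        for day_hours in schedule:
            steps = int(24.0 / dt)
            for s in range(steps):
                inhaled = c_air if s * dt < day_hours else 0.0
                c += dt * (k_up * inhaled - k_elim * c)
                if week == weeks - 1:
                    auc += c * dt
    return auc

std = weekly_auc(1.0, [8, 8, 8, 8, 8, 0, 0])        # standard 40 h/week
novel = weekly_auc(1.0, [12, 12, 12, 12, 0, 0, 0])  # 12 h/day, 48 h/week
factor = std / novel   # limit adjustment factor; ~40/48 for a linear model
```

    A real PB-PK adjustment replaces the single compartment with the physiological organ network and metabolism terms described in the abstract; nonlinear metabolism is exactly where the simple hours-ratio rule breaks down.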

  14. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1996

    NASA Astrophysics Data System (ADS)

    Sovers, O. J.; Jacobs, Christopher S.

    1996-08-01

    al., as well as a larger selection of recent models for the diurnal and semidiurnal bands. The ever-lengthening time base of VLBI results will soon require modeling the rotation of the Galaxy. Effects of such long-period motion of the Earth against the background of extragalactic sources are now included in the MODEST model. More details are given concerning antenna subreflector focussing. Amendments to the tropospheric modeling include the option to model the latitude dependence of the Earth's curvature and gravity in the Lanyi mapping function, as well as to set the mapping function's adjustable parameters via a standard global atmosphere model. The Ifadis and NMF mapping functions, and the capability of estimating azimuthal troposphere gradients, are also available. Finally, a number of minor misprints in Revision 5 are corrected.

  15. Laser-plasma SXR/EUV sources: adjustment of radiation parameters for specific applications

    NASA Astrophysics Data System (ADS)

    Bartnik, A.; Fiedorowicz, H.; Fok, T.; Jarocki, R.; Kostecki, J.; Szczurek, A.; Szczurek, M.; Wachulak, P.; Wegrzyński, Ł.

    2014-12-01

    In this work soft X-ray (SXR) and extreme ultraviolet (EUV) laser-produced plasma (LPP) sources employing Nd:YAG laser systems of different parameters are presented. The first of them is a 10-Hz EUV source based on a double-stream gas-puff target irradiated with a 3-ns/0.8-J laser pulse. In the second, a 10-ns/10-J/10-Hz laser system is employed, and the third utilizes a laser system with the pulse shortened to approximately 1 ns. Using various gases in the gas-puff targets it is possible to obtain intense radiation in different wavelength ranges. In this way intense continuous radiation in a wide spectral range, as well as quasi-monochromatic radiation, was produced. To obtain high EUV or SXR fluence the radiation was focused using three types of grazing-incidence collectors and a multilayer Mo/Si collector. The first of them is a multifoil gold-plated collector consisting of two orthogonal stacks of ellipsoidal mirrors forming a double-focusing device. The second is an ellipsoidal collector forming part of an axisymmetrical ellipsoidal surface. The third collector is composed of two aligned axisymmetrical paraboloidal mirrors optimized for focusing SXR radiation. The last collector is an off-axis ellipsoidal multilayer Mo/Si mirror allowing for efficient focusing of the radiation in the spectral region centered at λ = 13.5 ± 0.5 nm. In this paper, spectra of unaltered EUV or SXR radiation produced in different LPP source configurations, together with spectra and fluence values of focused radiation, are presented. Specific configurations of the sources were assigned to various applications.

  16. Reliability of parameter estimation in respirometric models.

    PubMed

    Checchi, Nicola; Marsili-Libelli, Stefano

    2005-09-01

    When modelling a biochemical system, the fact that model parameters cannot be estimated exactly motivates the definition of tests for detecting unreliable estimates and for designing better experiments. The method applied in this paper is a further development of Marsili-Libelli et al. [2003. Confidence regions of estimated parameters for ecological systems. Ecol. Model. 165, 127-146.] and is based on the confidence regions computed with the Fisher or the Hessian matrix. It detects the influence of the curvature, representing the distortion of the model response due to its nonlinear structure. If the test is passed, then the estimation can be considered reliable, in the sense that the optimisation search has reached a point on the error surface where the effect of nonlinearities is negligible. The test is used here for an assessment of respirometric model calibration, i.e. checking the experimental design and estimation reliability, with an application to real-life data in the ASM context. Only dissolved oxygen measurements have been considered, because this is a very popular experimental set-up in wastewater modelling. The estimation of a two-step nitrification model using batch respirometric data is considered, showing that the initial amount of ammonium-N and the number of data points play a crucial role in obtaining reliable estimates. From this basic application other results are derived, such as the estimation of the combined yield factor and of the second-step parameters, based on a modified kinetics and a specific nitrite experiment. Finally, guidelines for designing reliable experiments are provided.
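    A rough sketch of the Fisher-matrix side of such a reliability check: for a nonlinear model y(θ, t), the sensitivities J = ∂y/∂θ give the approximate parameter covariance σ²(JᵀJ)⁻¹, from which confidence regions and standard errors follow. The decay model and the numbers below are hypothetical, not the paper's respirometric model:

```python
import math

def model(theta, t):
    """Hypothetical two-parameter model: first-order decay y = a * exp(-k*t)."""
    a, k = theta
    return a * math.exp(-k * t)

def jacobian(theta, times, eps=1e-6):
    """Finite-difference sensitivity matrix J[i][j] = dy(t_i)/dtheta_j."""
    J = []
    for t in times:
        row = []
        for j in range(len(theta)):
            th = list(theta)
            th[j] += eps
            row.append((model(th, t) - model(theta, t)) / eps)
        J.append(row)
    return J

def fim_covariance(J, sigma2):
    """Approximate parameter covariance sigma^2 * (J^T J)^-1 (2x2 case)."""
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    d = sum(r[1] * r[1] for r in J)
    det = a * d - b * b
    return [[ sigma2 * d / det, -sigma2 * b / det],
            [-sigma2 * b / det,  sigma2 * a / det]]

times = [0.5 * i for i in range(1, 13)]              # sampling instants
cov = fim_covariance(jacobian([10.0, 0.4], times), sigma2=0.04)
se_a, se_k = cov[0][0] ** 0.5, cov[1][1] ** 0.5
```

    The curvature test described in the abstract goes one step further: it compares this Fisher-based region with the Hessian-based one, and flags the estimate as unreliable when the two disagree strongly.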

  17. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  18. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    PubMed

    Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack

    2016-05-27

    Large-scale modeling of glacier mass balance often relies on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates, using coarse resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a⁻¹ for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the usage of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparably small and of minor importance.
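    The RMSE-minimization calibration can be sketched in a few lines: grid-search a temperature shift and a precipitation scaling factor against in situ observations. The toy degree-day mass-balance model, the parameter grids, and the synthetic "stake" data below are illustrative assumptions, not the study's actual model or data:

```python
def mass_balance(temp, precip, t_shift, p_scale, ddf=0.6):
    """Toy degree-day mass balance: scaled accumulation minus melt (arbitrary units)."""
    melt = ddf * max(temp + t_shift, 0.0)
    return p_scale * precip - melt

def calibrate(obs, temps, precips):
    """Grid-search the (t_shift, p_scale) pair minimizing RMSE vs observations."""
    best = (None, float("inf"))
    for ts in [x / 10.0 for x in range(0, 31)]:            # shift 0.0 .. 3.0 deg C
        for ps in [1.0 + x / 20.0 for x in range(0, 21)]:  # scale 1.0 .. 2.0
            err = [mass_balance(t, p, ts, ps) - o
                   for t, p, o in zip(temps, precips, obs)]
            rmse = (sum(e * e for e in err) / len(err)) ** 0.5
            if rmse < best[1]:
                best = ((ts, ps), rmse)
    return best

# Synthetic "stake" observations generated with t_shift=1.4, p_scale=1.2
temps = [-2.0, 0.5, 1.0, -1.0, 2.0]
precips = [1.0, 0.8, 0.6, 1.2, 0.5]
obs = [mass_balance(t, p, 1.4, 1.2) for t, p in zip(temps, precips)]
params, rmse = calibrate(obs, temps, precips)
```

    The regression-optimization alternative mentioned in the abstract would instead fit the shift and scale so that modeled-versus-observed balances regress onto the 1:1 line; with inhomogeneous reference data the two criteria can land on quite different parameters, as the paper reports.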

  19. CHAMP: Changepoint Detection Using Approximate Model Parameters

    DTIC Science & Technology

    2014-06-01

    detecting changes in the parameters and models that generate observed data. Commonly cited examples include detecting changes in stock market behavior [4... experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. 2 Related work Hidden Markov Models (HMMs) are... largely the de facto tool of choice when analyzing time series data, but the standard HMM formulation has several undesirable properties. The number of

  20. Modelling tourists arrival using time varying parameter

    NASA Astrophysics Data System (ADS)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique by modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used to model the number of Korean tourists to Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Observing that tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all of the predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
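    A TVP regression with a random-walk coefficient is exactly the model a Kalman filter handles. The sketch below (single regressor, hypothetical noise variances, synthetic data) illustrates the scalar filtering recursion; the paper's multi-predictor specification is not reproduced:

```python
def tvp_kalman(ys, xs, q=0.01, r=1.0, b0=0.0, p0=10.0):
    """
    Time-varying parameter regression y_t = x_t * b_t + e_t, where the
    coefficient follows a random walk b_t = b_{t-1} + w_t (Var w = q,
    Var e = r). Returns the filtered coefficient path (scalar Kalman filter).
    """
    b, p, path = b0, p0, []
    for y, x in zip(ys, xs):
        p = p + q                        # predict: random-walk variance grows
        k = p * x / (x * x * p + r)      # Kalman gain
        b = b + k * (y - x * b)          # update with the prediction error
        p = (1.0 - k * x) * p
        path.append(b)
    return path

# Synthetic series whose true coefficient drifts from 1.0 to 2.0
xs = [1.0] * 100
ys = [1.0 + t / 100.0 for t in range(100)]   # y_t = b_t * x_t, b drifting upward
path = tvp_kalman(ys, xs, q=0.05, r=0.1)
```

    The filtered path tracks the drifting coefficient with a small lag governed by the ratio q/r; a fixed-coefficient OLS fit would instead average the drift away.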

  1. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2017-08-29

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  2. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

    Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas which were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: for Fennoscandia the BIFROST dataset (Lidberg, 2010) and for North America the dataset from Sella, 2007. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model which best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. Firstly, the sensitivity to the harmonic degree was tested in order to reduce the computation time. In the test, nominal viscosity values and pre-defined lithosphere thickness models were used, varying the maximum harmonic degree. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure does not differ by more than 10%, the lower harmonic degree may be used as well. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as a larger value did not significantly modify the results while the computation time was kept reasonable. Secondly, the GIA computations were performed to find the model most likely to fit the GNSS-based velocity field in the target areas. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on horizontal and vertical velocity rates from GIA modelling was studied. For every

  3. Sensitivity assessment, adjustment, and comparison of mathematical models describing the migration of pesticides in soil using lysimetric data

    NASA Astrophysics Data System (ADS)

    Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.

    2009-07-01

    The water block of physically founded models of different levels (chromatographic PEARL models and dual-porosity MACRO models) was parameterized using laboratory experimental data and tested using the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters at the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. The models unadjusted for the water block underestimated the lysimeter flow and overestimated the soil water content. The theoretical necessity of the model adjustment was explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). The adjustment of the models by selecting the most sensitive hydrophysical parameters of the soils (the approximation parameters of the soil water retention curve (SWRC)) gave good agreement between the predicted moisture profiles and their actual values. In contrast to the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirmed the necessity of physically founded models accounting for the separation of preferential flows in the pore space for the prediction, analysis, optimization, and management of modern agricultural technologies.

  4. Delayed heart rate recovery after exercise as a risk factor of incident type 2 diabetes mellitus after adjusting for glycometabolic parameters in men.

    PubMed

    Yu, Tae Yang; Jee, Jae Hwan; Bae, Ji Cheol; Hong, Won-Jung; Jin, Sang-Man; Kim, Jae Hyeon; Lee, Moon-Kyu

    2016-10-15

    Some studies have reported that delayed heart rate recovery (HRR) after exercise is associated with incident type 2 diabetes mellitus (T2DM). This study aimed to investigate the longitudinal association of delayed HRR following a graded exercise treadmill test (GTX) with the development of T2DM, including glucose-associated parameters as adjusting factors, in healthy Korean men. Analyses were performed including fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, together with known confounders, as adjustment factors. HRR was calculated as peak heart rate minus heart rate after a 1-min rest (HRR 1). A Cox proportional hazards model was used to quantify the independent association between HRR and incident T2DM. During 9082 person-years of follow-up between 2006 and 2012, there were 180 (10.1%) incident cases of T2DM. After adjustment for age, BMI, systolic BP, diastolic BP, smoking status, peak heart rate, peak oxygen uptake, TG, LDL-C, HDL-C, fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, the hazard ratios (HRs) [95% confidence interval (CI)] of incident T2DM comparing the second and third tertiles to the first tertile of HRR 1 were 0.867 (0.609-1.235) and 0.624 (0.426-0.915), respectively (p for trend=0.017). As a continuous variable, in the fully-adjusted model, the HR (95% CI) of incident T2DM associated with each 1 beat increase in HRR 1 was 0.980 (0.960-1.000) (p=0.048). This study demonstrated that delayed HRR after exercise predicts incident T2DM in men, even after adjusting for fasting glucose, HOMA-IR, HOMA-β, and HbA1c. However, only HRR 1 had clinical significance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. A whale better adjusts the biosonar to ordered rather than to random changes in the echo parameters.

    PubMed

    Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee

    2012-09-01

    A false killer whale's (Pseudorca crassidens) sonar clicks and auditory evoked potentials (AEPs) were recorded during echolocation with simulated echoes in two series of experiments. In the first, both the echo delay and the transfer factor (the dB ratio of the echo sound-pressure level to the emitted pulse source level) were varied randomly from trial to trial until enough data were collected (random presentation). In the second, a combination of echo delay and transfer factor was kept constant until enough data were collected (ordered presentation). The mean click level decreased with shortening delay and increasing transfer factor, more so with ordered presentation than with random presentation. AEPs to the self-heard emitted clicks decreased with shortening delay and increasing echo level equally in both series. AEPs to echoes increased with increasing echo level, depending little on the echo delay with random presentation but much more with ordered presentation. Thus some adjustment of the whale's biosonar was possible without prior information about the echo parameters; however, the availability of prior information about echoes gave the whale additional capability to adjust both the transmitting and receiving parts of the biosonar.

  6. Adjusting kinematics and kinetics in a feedback-controlled toe walking model

    PubMed Central

    2012-01-01

    Background In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. The ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutical intervention - inhibitory casting. Results By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. 
We further acknowledge that they to some extent follow similar improvement tendencies as those which one can identify in gait kinematics and

  7. Detailed analysis of charge transport in amorphous organic thin layer by multiscale simulation without any adjustable parameters

    PubMed Central

    Uratani, Hiroki; Kubo, Shosei; Shizu, Katsuyuki; Suzuki, Furitsu; Fukushima, Tatsuya; Kaji, Hironori

    2016-01-01

    Hopping-type charge transport in an amorphous thin layer composed of organic molecules is simulated by the combined use of molecular dynamics, quantum chemical, and Monte Carlo calculations. By explicitly considering the molecular structure and the disordered intermolecular packing, we reasonably reproduce the experimental hole and electron mobilities and their applied electric field dependence (Poole–Frenkel behaviour) without using any adjustable parameters. We find that the distribution of the density-of-states originating from the amorphous nature has a significant impact on both the mobilities and Poole–Frenkel behaviour. Detailed analysis is also provided to reveal the molecular-level origin of the charge transport, including the origin of Poole–Frenkel behaviour. PMID:28000728

  8. Detailed analysis of charge transport in amorphous organic thin layer by multiscale simulation without any adjustable parameters

    NASA Astrophysics Data System (ADS)

    Uratani, Hiroki; Kubo, Shosei; Shizu, Katsuyuki; Suzuki, Furitsu; Fukushima, Tatsuya; Kaji, Hironori

    2016-12-01

    Hopping-type charge transport in an amorphous thin layer composed of organic molecules is simulated by the combined use of molecular dynamics, quantum chemical, and Monte Carlo calculations. By explicitly considering the molecular structure and the disordered intermolecular packing, we reasonably reproduce the experimental hole and electron mobilities and their applied electric field dependence (Poole–Frenkel behaviour) without using any adjustable parameters. We find that the distribution of the density-of-states originating from the amorphous nature has a significant impact on both the mobilities and Poole–Frenkel behaviour. Detailed analysis is also provided to reveal the molecular-level origin of the charge transport, including the origin of Poole–Frenkel behaviour.

  9. Moose models with vanishing S parameter

    SciTech Connect

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-09-01

In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2)_L and U(1)_Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall-Sundrum metric.

  10. Model parameters for simulation of physiological lipids

    PubMed Central

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  11. Uncertainty quantification for optical model parameters

    DOE PAGES

    Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...

    2017-02-21

Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5–25 MeV/u, on targets with mass A=12–208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ2 minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ2 functions (in comparison to the uncorrelated χ2 functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.
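The generic fit-then-band procedure can be sketched as follows; the model, data, and linearized 1.96-sigma band are illustrative stand-ins, not the authors' optical-model machinery:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-parameter model y = a * exp(-b * x), fit to noisy
# synthetic "data"; the parameter covariance from the chi^2 fit is then
# propagated to a pointwise 95% confidence band.
def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 40)
sigma = 0.05 * np.ones_like(x)
y = model(x, 2.0, 0.7) + rng.normal(0.0, sigma)

popt, pcov = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True)

# Linearized band: var(f) = J C J^T, with J the Jacobian in the parameters.
J = np.column_stack([np.exp(-popt[1] * x),                  # df/da
                     -popt[0] * x * np.exp(-popt[1] * x)])  # df/db
band = 1.96 * np.sqrt(np.einsum('ij,jk,ik->i', J, pcov, J))
yfit = model(x, *popt)
```

A correlated chi^2, as in the abstract, would additionally carry a non-diagonal data covariance into the fit.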

  12. Women's Work Conditions and Marital Adjustment in Two-Earner Couples: A Structural Model.

    ERIC Educational Resources Information Center

    Sears, Heather A.; Galambos, Nancy L.

    1992-01-01

    Evaluated structural model of women's work conditions, women's stress, and marital adjustment using path analysis. Findings from 86 2-earner couples with adolescents indicated support for spillover model in which women's work stress and global stress mediated link between their work conditions and their perceptions of marital adjustment.…

  13. A resampling approach for adjustment in prediction models for covariate measurement error.

    PubMed

    Li, Wei; Mazumdar, Sati; Arena, Vincent C; Sussman, Nancy

    2005-03-01

Recent works on covariate measurement errors focus on the possible biases in model coefficient estimates. Usually, measurement error in a covariate tends to attenuate the coefficient estimate for the covariate, i.e., a bias toward the null occurs. Measurement error in another confounding or interacting variable typically results in incomplete adjustment for that variable. Hence, the coefficient for the covariate of interest may be biased either toward or away from the null. This paper presents a new method based on a resampling technique to deal with covariate measurement errors in the context of prediction modeling. Prediction accuracy is our primary parameter of interest. Prediction accuracy of a model is defined as the success rate of prediction when the model predicts new responses. We call our method bootstrap regression calibration (BRC). We study logistic regression with interacting covariates as our prediction model. We measure the prediction accuracy of a model by the receiver operating characteristic (ROC) method. Results from simulations show that bootstrap regression calibration offers consistent enhancement over the commonly used regression calibration (RC) method in terms of improving prediction accuracy of the model and reducing bias in the estimated coefficients.
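As a hedged, simplified sketch of the regression-calibration idea with a bootstrap wrapped around it (a linear outcome stands in for the paper's logistic/ROC setting; all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0, 1, n)              # true covariate
w = x + rng.normal(0, 0.8, n)        # error-prone measurement of x
y = 1.5 * x + rng.normal(0, 1, n)    # outcome; true slope is 1.5

def slope(a, b):
    """Least-squares slope of b regressed on a."""
    return np.polyfit(a, b, 1)[0]

naive = slope(w, y)                  # attenuated toward zero

# Regression-calibration step: estimate E[X|W] from a validation
# subsample where X is observed (here, the first 200 records),
# then regress the outcome on the calibrated covariate.
lam = slope(w[:200], x[:200])
rc = slope(lam * w, y)

# Bootstrap the whole two-step procedure for an uncertainty estimate.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    lam_b = slope(w[idx][:200], x[idx][:200])
    boot.append(slope(lam_b * w[idx], y[idx]))
se = np.std(boot)
```

Resampling the calibration step together with the outcome fit is what distinguishes the bootstrap variant from plain RC.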

  14. TruMicro Series 2000 sub-400 fs class industrial fiber lasers: adjustment of laser parameters to process requirements

    NASA Astrophysics Data System (ADS)

    Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2017-02-01

The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment [1]. Within the next years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: non-contact, wear-free processing; higher process accuracy; increased processing speed; and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real-time process-inherent adjustment of the parameters with industry-ready reliability. This industry-ready reliability is ensured by long experience in designing and building ultrashort-pulse lasers, in combination with rigorous optimization of the mechanical construction, optical components, and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all-fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high repetition rate mode-locked fiber oscillator also enabling flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization

  15. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least squares fitted to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table are required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  16. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  17. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  18. Modelling of snow avalanche dynamics: influence of model parameters

    NASA Astrophysics Data System (ADS)

    Bozhinskiy, A. N.

The three-parameter hydraulic model of snow avalanche dynamics including the coefficients of dry and turbulent friction and the coefficient of new-snow-mass entrainment was investigated. The 'Domestic' avalanche site in Elbrus region, Caucasus, Russia, was chosen as the model avalanche range. According to the model, the fixed avalanche run-out can be achieved with various combinations of model parameters. At a fixed value of the coefficient of entrainment m_e, we have a curve on the plane of the coefficients of dry and turbulent friction. It was found that the family of curves (with m_e as a parameter) intersects at a single point. The value of the coefficient of turbulent friction at the cross-point remained practically constant for the maximum and average avalanche run-outs. The conclusions obtained are confirmed by the results of modelling for six arbitrarily chosen avalanche sites: three in the Khibiny mountains, Kola Peninsula, Russia, two in the Elbrus region and one idealized site with an exponential longitudinal profile. The dependences of run-out on the coefficient of dry friction are constructed for all the investigated avalanche sites. The results are important for the statistical simulation of avalanche dynamics since they suggest the possibility of using only one random model parameter, namely, the coefficient of dry friction, in the model. The histograms and distribution functions of the coefficient of dry friction are constructed and presented for avalanche sites Nos 22 and 43 (Khibiny mountains) and 'Domestic', with the available series of field data.
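The trade-off between the two friction coefficients at fixed run-out can be illustrated with a minimal Voellmy-type calculation (uniform slope, entrainment ignored; all values hypothetical, not from the cited sites):

```python
import math

# On a uniform slope, a Voellmy-type point-mass model reaches the
# terminal velocity v_t = sqrt(xi * h * (sin(theta) - mu * cos(theta))),
# where mu is the dry-friction coefficient and xi the turbulent-friction
# coefficient. Holding v_t fixed therefore defines a curve xi(mu): many
# (mu, xi) pairs yield the same dynamics, echoing the family of curves
# described in the abstract.
def xi_for_fixed_vt(mu, vt=30.0, theta_deg=30.0, h=1.0):
    th = math.radians(theta_deg)
    return vt ** 2 / (h * (math.sin(th) - mu * math.cos(th)))

curve = [(mu, xi_for_fixed_vt(mu)) for mu in (0.1, 0.2, 0.3, 0.4)]
```

Larger dry friction must be compensated by weaker turbulent drag (larger xi) to keep the same terminal velocity.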

  19. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    DTIC Science & Technology

    1986-06-01

…parameter. This rules out models which characterize each P_kj in terms of conjoint effects of item and state parameters, as the Rasch model does, for example. It also rules out models that impose ordering constraints on the P_kj's. Thus, while many interesting models can be cast in terms of equality constraints…

  20. Multiscale modeling of failure in composites under model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.

    2015-09-01

    This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.

  1. Development of a winter wheat adjustable crop calendar model

    NASA Technical Reports Server (NTRS)

    Baker, J. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.

  2. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
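The direction of the reported bias can be seen in a deliberately simplified one-way misclassification example (numbers are illustrative, not the manatee data; the actual study uses a multistate capture-recapture likelihood):

```python
# If a true breeder is correctly classified only with probability
# p_class (an attendant calf may go unseen), the naive proportion of
# breeders is biased low by exactly that factor.
true_psi = 0.6    # assumed true breeding proportion (hypothetical)
p_class = 0.5     # assumed probability a breeder is classified correctly

observed = true_psi * p_class     # expected naive (biased) proportion
adjusted = observed / p_class     # divide out the classification prob.
```

The naive value understates breeding by a factor of two here, mirroring the 0.31 vs. 0.61 contrast reported in the abstract.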

  3. Gait parameter adjustments for walking on a treadmill at preferred, slower, and faster speeds in older adults with down syndrome.

    PubMed

    Smith, Beth A; Kubo, Masayoshi; Ulrich, Beverly D

    2012-01-01

    The combined effects of ligamentous laxity, hypotonia, and decrements associated with aging lead to stability-enhancing foot placement adaptations during routine overground walking at a younger age in adults with Down syndrome (DS) compared to their peers with typical development (TD). Our purpose here was to examine real-time adaptations in older adults with DS by testing their responses to walking on a treadmill at their preferred speed and at speeds slower and faster than preferred. We found that older adults with DS were able to adapt their gait to slower and faster than preferred treadmill speeds; however, they maintained their stability-enhancing foot placements at all speeds compared to their peers with TD. All adults adapted their gait patterns similarly in response to faster and slower than preferred treadmill-walking speeds. They increased stride frequency and stride length, maintained step width, and decreased percent stance as treadmill speed increased. Older adults with DS, however, adjusted their stride frequencies significantly less than their peers with TD. Our results show that older adults with DS have the capacity to adapt their gait parameters in response to different walking speeds while also supporting the need for intervention to increase gait stability.

  4. An impaired driver model for safe driving by control of vehicle parameters

    NASA Astrophysics Data System (ADS)

    Phuc Le, Thanh; Erdem Sahin, Davut; Stiharu, Ion

    2013-03-01

This paper presents the results of the investigation on a driver model that can be adjusted to perform the role of an impaired driver (especially, an alcohol-affected driver) who exhibits the deterioration in driving skills in correlation with a specific level of impairment. The linear vehicle model providing lateral displacement and yaw is coupled with the driver model that is derived as a linear quadratic regulator with delay. The decrement in performance is modelled by decreasing the preview time, visual perception, and control gains, and by increasing the reaction time. By comparing the standard deviation of the lateral position between the model and the real driver, the performance of the driver model impaired at the blood alcohol concentrations of 0.05%, 0.08% and 0.11% results in deteriorations of 21%, 26% and 30%, respectively. The lateral error is reduced if the vehicle parameters are adjusted to adapt to the impaired driver model.

  5. Examining Competing Models of the Associations among Peer Victimization, Adjustment Problems, and School Connectedness

    ERIC Educational Resources Information Center

    Loukas, Alexandra; Ripperger-Suhler, Ken G.; Herrera, Denise E.

    2012-01-01

    The present study tested two competing models to assess whether psychosocial adjustment problems mediate the associations between peer victimization and school connectedness one year later, or if peer victimization mediates the associations between psychosocial adjustment problems and school connectedness. Participants were 500 10- to 14-year-old…

  6. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…

  8. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  9. The relationship of values to adjustment in illness: a model for nursing practice.

    PubMed

    Harvey, R M

    1992-04-01

    This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values as well as the need for value change are described in the literature related to adjustment to physical disability and chronic illness. An empirical model, however, that explains the relationship of values to adjustment or adaptation has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.

  10. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  11. [Calculation of parameters in forest evapotranspiration model].

    PubMed

    Wang, Anzhi; Pei, Tiefan

    2003-12-01

Forest evapotranspiration is an important component not only in water balance, but also in energy balance. It is a great demand for the development of forest hydrology and forest meteorology to simulate the forest evapotranspiration accurately, which is also a theoretical basis for the management and utilization of water resources and forest ecosystem. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanism model for estimating forest evapotranspiration, based on the aerodynamic principle and energy balance equation. Using the data measured by the Routine Meteorological Measurement System and Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum φ_m, and stability function for heat φ_h were ascertained. The displacement height of the study site was equal to 17.8 m, near the mean canopy height, and the functions describing the variation of φ_m and φ_h with the gradient Richardson number R_i were constructed.

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
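A minimal sketch of the sensitivity-gated update logic described above; the map shapes, performance functions, thresholds, and gains are invented for illustration, since the patent abstract does not specify them:

```python
import numpy as np

map1 = np.full((3, 3), 10.0)   # look-up table for engine control param 1
map2 = np.full((3, 3), 5.0)    # look-up table for engine control param 2
i, j = 1, 2                    # cell selected by detected operating conditions

def sensitivity(perf, param, dp=1e-3):
    """Finite-difference sensitivity d(performance)/d(parameter)."""
    return (perf(param + dp) - perf(param)) / dp

# Hypothetical performance variable as a function of each parameter.
perf1 = lambda p: 2.0 * p      # strongly sensitive to parameter 1
perf2 = lambda p: 0.01 * p     # weakly sensitive to parameter 2
target, threshold, gain = 25.0, 0.5, 0.1

# Adjust only the map whose sensitivity exceeds the threshold,
# nudging the performance variable toward the target.
s1 = sensitivity(perf1, map1[i, j])
s2 = sensitivity(perf2, map2[i, j])
if s1 > threshold:
    map1[i, j] += gain * (target - perf1(map1[i, j])) / s1
if s2 > threshold:
    map2[i, j] += gain * (target - perf2(map2[i, j])) / s2
```

Here only map1 is updated: its sensitivity (2.0) clears the threshold, while map2's (0.01) does not.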

  13. Adjustment problems and maladaptive relational style: a mediational model of sexual coercion in intimate relationships.

    PubMed

    Salwen, Jessica K; O'Leary, K Daniel

    2013-07-01

Four hundred and fifty-three married or cohabitating couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.

  14. Development of a charge adjustment model for cardiac catheterization.

    PubMed

    Brennan, Andrew; Gauvreau, Kimberlee; Connor, Jean; O'Connell, Cheryl; David, Sthuthi; Almodovar, Melvin; DiNardo, James; Banka, Puja; Mayer, John E; Marshall, Audrey C; Bergersen, Lisa

    2015-02-01

A methodology that would allow for comparison of charges across institutions has not been developed for catheterization in congenital heart disease. A single institution catheterization database with prospectively collected case characteristics was linked to hospital charges related and limited to an episode of care in the catheterization laboratory for fiscal years 2008-2010. Catheterization charge categories (CCC) were developed to group types of catheterization procedures using a combination of empiric data and expert consensus. A multivariable model with outcome charges was created using CCC and additional patient and procedural characteristics. In 3 fiscal years, 3,839 cases were available for analysis. Forty catheterization procedure types were categorized into 7 CCC yielding a grouper variable with an R² explanatory value of 72.6%. In the final CCC, the largest proportion of cases was in CCC 2 (34%), which included diagnostic cases without intervention. Biopsy cases were isolated in CCC 1 (12%), and percutaneous pulmonary valve placement alone made up CCC 7 (2%). The final model included CCC, number of interventions, and cardiac diagnosis (R² = 74.2%). Additionally, current financial metrics such as APR-DRG severity of illness and case mix index demonstrated a lack of correlation with CCC. We have developed a catheterization procedure type financial grouper that accounts for the diverse case population encountered in catheterization for congenital heart disease. CCC and our multivariable model could be used to understand financial characteristics of a population at a single point in time, longitudinally, and to compare populations.

  15. A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.

  17. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
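The complex-stiffness representation of hysteresis mentioned above can be written compactly; this is the standard loss-factor notation, not the specific Golla-Hughes transfer-function form:

```latex
% Steady-state equation of motion with hysteretic (structural) damping:
% the real stiffness k is replaced by a complex stiffness k(1 + i\eta).
\[
  m\ddot{x} + k\,(1 + i\eta)\,x = F_0\, e^{i\omega t},
\]
% where \eta is the material loss factor. The transfer-function approach
% generalizes this to frequency-dependent damping while keeping the
% boundary-value problem separable, as described in the abstract.
```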

  18. A Disequilibrium Adjustment Mechanism for CPE Macroeconometric Models: Initial Testing on SOVMOD.

    DTIC Science & Technology

    1979-02-01

    SRI International, Strategic Studies Center, Arlington, VA. Approved for public release; distribution unlimited. The report describes work on the model aimed at facilitating the integration of a disequilibrium adjustment mechanism into the macroeconometric model.

  19. Improvement of hydrological model calibration by selecting multiple parameter ranges

    NASA Astrophysics Data System (ADS)

    Wu, Qiaofeng; Liu, Shuguang; Cai, Yi; Li, Xinjian; Jiang, Yangming

    2017-01-01

    The parameters of hydrological models are usually calibrated to achieve good performance, owing to the highly non-linear nature of hydrological process modelling. However, calibration efficiency is directly related to the parameter ranges. Furthermore, parameter range selection is affected by the probability distribution of parameter values, parameter sensitivity, and correlation. A newly proposed method is employed to determine the optimal combination of multi-parameter ranges for improving the calibration of hydrological models. First, the probability distribution was specified for each parameter of the model based on genetic algorithm (GA) calibration. Then, several ranges were selected for each parameter according to the corresponding probability distribution, and subsequently the optimal range was determined by comparing the model results calibrated with the different selected ranges. Next, parameter correlation and sensitivity were evaluated by quantifying two indices, RC(Y,X) and SE, which can be used to accommodate negatively correlated parameters when specifying the optimal combination of ranges of all parameters for calibrating models. The investigation shows that the probability distribution of calibrated values of any particular parameter in a Xinanjiang model approaches a normal or exponential distribution. The multi-parameter optimal range selection method is superior to the single-parameter one for calibrating hydrological models with multiple parameters. The combination of the optimal ranges of all parameters is not the optimum, inasmuch as some parameters have negative effects on other parameters. The application of the proposed methodology gives rise to an increase of 0.01 in minimum Nash-Sutcliffe efficiency (ENS) compared with that of the pure GA method. The rise in the minimum ENS with little change in the maximum may shrink the range of possible solutions, which can effectively reduce the uncertainty of model performance.
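    The calibration criterion referenced above, the Nash-Sutcliffe efficiency (ENS), can be sketched in a few lines; the discharge values here are made up for illustration, not taken from the study:

    ```python
    def nash_sutcliffe(observed, simulated):
        """ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
        ENS = 1 is a perfect fit; ENS <= 0 means the model is no better
        than predicting the observed mean."""
        mean_obs = sum(observed) / len(observed)
        num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        den = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - num / den

    # Hypothetical observed vs simulated discharge series
    obs = [1.0, 2.0, 3.0, 4.0, 5.0]
    sim = [1.1, 1.9, 3.2, 3.8, 5.1]
    print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.989
    ```

    An improvement of 0.01 in the minimum ENS across calibration runs, as reported, is the kind of difference this statistic would register.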

  20. How patients experience progressive loss of visual function: a model of adjustment using qualitative methods

    PubMed Central

    Hayeems, R Z; Geller, G; Finkelstein, D; Faden, R R

    2005-01-01

    Background: People with retinitis pigmentosa (RP) experience functional and psychological challenges as they adjust to progressive loss of visual function. The authors aimed to understand better the process of adjusting to RP in light of the emotional suffering associated with this process. Methods: Adults with RP were recruited from the Foundation Fighting Blindness and the Wilmer Eye Institute in Baltimore. Focus groups and semistructured interviews addressed the process of adjusting to RP and were audiotaped and transcribed. The transcripts were analysed qualitatively in order to generate a model of adjustment. Results: A total of 43 individuals participated. It was found that, on diagnosis, people with RP seek to understand its meaning in their lives. Mastering the progressive functional implications associated with RP is contingent upon shifting personal identity from a sighted to a visually impaired person. In this sample, six participants self identified as sighted, 10 self identified as in transition, and 27 self identified as visually impaired. This adjustment process can be understood in terms of a five stage model of behaviour change. Conclusions: The proposed model presents one way to understand the process of adjusting to RP and could assist ophthalmologists in meeting their moral obligation to lessen patients’ suffering, which arises in the course of their adjustment to progressive loss of visual function. PMID:15834096

  1. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  2. Volume-independent elastance: a useful parameter for open-lung positive end-expiratory pressure adjustment.

    PubMed

    Carvalho, Alysson Roncally; Bergamini, Bruno Curty; Carvalho, Niedja S; Cagido, Viviane R; Neto, Alcendino C; Jandre, Frederico C; Zin, Walter A; Giannella-Neto, Antonio

    2013-03-01

    A decremental positive end-expiratory pressure (PEEP) trial after full lung recruitment allows for the adjustment of the lowest PEEP that prevents end-expiratory collapse (open-lung PEEP). For a tidal volume (Vt) approaching zero, the PEEP of minimum respiratory system elastance (PEEP(minErs)) is theoretically equal to the pressure at the mathematical inflection point (MIP) of the pressure-volume curve, and seems to correspond to the open-lung PEEP in a decremental PEEP trial. Nevertheless, PEEP(minErs) is dependent on Vt and decreases as Vt increases. To circumvent this dependency, we proposed the use of a second-order model in which the volume-independent elastance (E1) is used to set open-lung PEEP. Pressure-volume curves and a recruitment maneuver followed by decremental PEEP trials, with a Vt of 6 and 12 mL/kg, were performed in 24 Wistar rats with acute lung injury induced by intraperitoneally injected (n = 8) or intratracheally instilled (n = 8) Escherichia coli lipopolysaccharide. In 8 control animals, the anterior chest wall was surgically removed after PEEP trials, and the protocol was repeated. Airway pressure (Paw) and flow (F) were continuously acquired and fitted by the linear single-compartment model (Paw = Rrs·F + Ers·V + PEEP, where Rrs is the resistance of the respiratory system, and V is volume) and the volume-dependent elastance model (Paw = Rrs·F + (E1 + E2·V)·V + PEEP, where E2·V is the volume-dependent elastance). From each model, the PEEP of minimum Ers (PEEP(minErs)) and the PEEP of minimum E1 (PEEP(minE1)) were identified and compared with each respective MIP. The accuracy of PEEP(minE1) and PEEP(minErs) in estimating MIP was assessed by bias and precision plots. Comparisons among groups were performed with the unpaired t test, whereas a paired t test was used between the control group before and after chest wall removal and within groups at different Vts. All P values were then corrected for multiple comparisons by the Bonferroni procedure. In all experimental groups
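    A sketch (with synthetic data, not the authors' fitting code) of estimating the linear single-compartment model Paw = Rrs·F + Ers·V + PEEP by ordinary least squares, solving the 3x3 normal equations directly:

    ```python
    def solve3(A, b):
        """Gaussian elimination with partial pivoting for a 3x3 system."""
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        n = 3
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(i + 1, n):
                f = M[r][i] / M[i][i]
                for c in range(i, n + 1):
                    M[r][c] -= f * M[i][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
        return x

    def fit_single_compartment(flow, volume, paw):
        """Least-squares estimates of (Rrs, Ers, PEEP) from sampled F, V, Paw."""
        rows = [[f, v, 1.0] for f, v in zip(flow, volume)]
        # Normal equations: (X^T X) beta = X^T y
        XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
        Xty = [sum(r[i] * y for r, y in zip(rows, paw)) for i in range(3)]
        return solve3(XtX, Xty)

    # Synthetic, noiseless samples generated with Rrs=2, Ers=30, PEEP=5
    flow   = [0.1, 0.3, 0.5, 0.2, 0.4, 0.6]
    volume = [0.05, 0.10, 0.15, 0.20, 0.12, 0.08]
    paw    = [2 * f + 30 * v + 5 for f, v in zip(flow, volume)]
    rrs, ers, peep = fit_single_compartment(flow, volume, paw)
    print(round(rrs, 2), round(ers, 2), round(peep, 2))  # -> 2.0 30.0 5.0
    ```

    The volume-dependent model adds one regressor (V·V with coefficient E2) to the same framework; the PEEP of minimum Ers or E1 is then read off a decremental PEEP sweep of such fits.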

  3. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2016-04-19

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.

  4. On the compensation between cloud feedback and cloud adjustment in climate models

    NASA Astrophysics Data System (ADS)

    Chung, Eui-Seok; Soden, Brian J.

    2017-04-01

    Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.

  5. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  6. Use of Factorial Analysis to Determine the Interaction Between Parameters of a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Varejão, C. G.; Varejão, E. V.; Costa, M. H.

    2007-05-01

    Land surface models use several parameters to represent biophysical processes. These parameters are frequently unknown, so the characteristics of the ecosystem under study are reproduced with uncertainty. Model calibration techniques find values for each parameter that reduce this uncertainty. However, the calibration process is computationally expensive, since many model runs are needed to adjust the parameters. The more parameters are considered, the more difficult the process becomes, particularly when there are interactions among them, and a modification of one parameter value implies a change in the optimum values of the other parameters. The use of a factorial experiment allows the identification of possible inert parameters, whose values do not influence the final result of the experiment and which, therefore, could be excluded from the calibration process. In this work we used factorial analysis to verify the existence of interactions among 5 parameters of the IBIS land surface model - Beta2 (distribution of fine roots), Vmax (maximum Rubisco enzyme capacity), m (coefficient related to the stomatal conductance), CHS (heat capacity of stems) and CHU (heat capacity of leaves) - evaluated against the output fluxes Rn (net radiation), H (sensible heat flux), LE (latent heat flux) and NEE (net ecosystem CO2 exchange). Data were collected at the Amazon tropical rainforest site known as K83, near Santarem, Brazil. Knowledge of the existing interactions between the parameters can considerably reduce the computational cost of further optimization, since each parameter that does not interact with others can be optimized independently.
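    A minimal sketch of how a two-level factorial design exposes an interaction between two model parameters: the interaction effect is half the difference between the effect of parameter A at the high versus the low level of parameter B. The response values are hypothetical, not IBIS output:

    ```python
    def factorial_effects(y00, y10, y01, y11):
        """Main effects and interaction from a 2x2 (two-parameter) design.
        y10 means A at its high level, B at its low level, etc."""
        a  = ((y10 - y00) + (y11 - y01)) / 2   # main effect of parameter A
        b  = ((y01 - y00) + (y11 - y10)) / 2   # main effect of parameter B
        ab = ((y11 - y01) - (y10 - y00)) / 2   # A x B interaction
        return a, b, ab

    # No interaction: the effect of A is the same at both levels of B
    print(factorial_effects(10, 14, 20, 24))   # -> (4.0, 10.0, 0.0)
    # Interaction present: the effect of A depends on the level of B
    print(factorial_effects(10, 14, 20, 30))   # -> (7.0, 13.0, 3.0)
    ```

    A parameter whose main effect and all interaction terms are near zero is a candidate "inert" parameter that can be dropped from calibration.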

  7. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  9. Comparison of the Properties of Regression and Categorical Risk-Adjustment Models

    PubMed Central

    Averill, Richard F.; Muldoon, John H.; Hughes, John S.

    2016-01-01

    Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302

  10. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Treesearch

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure.

  11. NGA-West2 Empirical Fourier Model for Active Crustal Regions to Generate Regionally Adjustable Response Spectra

    NASA Astrophysics Data System (ADS)

    Bora, S. S.; Cotton, F.; Scherbaum, F.; Kuehn, N. M.

    2016-12-01

    Adjustment of median ground motion prediction equations (GMPEs) from data-rich (host) regions to data-poor (target) regions is one of the major challenges that remain in the current practice of engineering seismology and seismic hazard analysis. A Fourier spectral representation of ground motion provides a solution to the adjustment problem that is physically transparent and consistent with the concepts of linear system theory. It also provides a direct interface for appreciating the physically expected behavior of seismological parameters on ground motion. In the present study, we derive an empirical Fourier model for computing regionally adjustable response spectral ordinates based on random vibration theory (RVT) from shallow crustal earthquakes in active tectonic regions, following the approach of Bora et al. (2014, 2015). For this purpose, we use an expanded NGA-West2 database with M 3.2-7.9 earthquakes at distances ranging from 0 to 300 km. A mixed-effects regression technique is employed to further explore various components of variability. The NGA-West2 database, expanded over a wide magnitude range, provides a better understanding (and constraint) of the source scaling of ground motion. The large global volume of the database also allows investigating regional patterns in the distance-dependent attenuation (i.e., geometrical spreading and inelastic attenuation) of ground motion as well as in the source parameters (e.g., magnitude and stress drop). Furthermore, event-wise variability and its correlation with the stress parameter are investigated. Finally, the application of the derived Fourier model in generating adjustable response spectra will be shown.

  12. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    PubMed Central

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-01-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified. PMID:27706076

  14. Development of a nursing model for life adjustment in patients with lung cancer in Japan.

    PubMed

    Horii, Naoko; Maekawa, Atsuko

    2013-09-01

    The aim of this study was to obtain basic data about the support for life adjustment in lung cancer patients in Japan. We identified factors that affect life adjustment in people with lung cancer, developed a model for life adjustment support of lung cancer patients, and investigated its validity. A survey was conducted using self-completed questionnaires, and responses were received by 203 individuals. Analysis of the responses revealed that life adjustment was regulated by six factors associated with positive self-evaluation: stress dissipation, fighting spirit, helplessness/hopelessness, full discussion with doctor about treatment, clarity of thought, and support network size. A model search with covariance structure analysis was conducted. The resulting model was revealed to have a goodness-of-fit index of 0.963, an adjusted goodness-of-fit index of 0.930, a comparative fit index of 0.974, and a root mean square error of approximation of 0.040. The findings suggest that improvements in quality of life can be expected by combining a positive self-evaluation in lung cancer patients and interventions to raise self-adjustment ability with the use of this Model, although it requires further testing. © 2013 Wiley Publishing Asia Pty Ltd.

  15. The development of a risk-adjusted capitation payment system: the Maryland Medicaid model.

    PubMed

    Weiner, J P; Tucker, A M; Collins, A M; Fakhraei, H; Lieberman, R; Abrams, C; Trapnell, G R; Folkemer, J G

    1998-10-01

    This article describes the risk-adjusted payment methodology employed by the Maryland Medicaid program to pay managed care organizations. It also presents an empirical simulation analysis using claims data from 230,000 Maryland Medicaid recipients. This simulation suggests that the new payment model will help adjust for adverse or favorable selection. The article is intended for a wide audience, including state and national policy makers concerned with the design of managed care Medicaid programs and actuaries, analysts, and researchers involved in the design and implementation of risk-adjusted capitation payment systems.

  16. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  18. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    USGS Publications Warehouse

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  19. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    USGS Publications Warehouse

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.

  20. Bayesian approach to decompression sickness model parameter estimation.

    PubMed

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
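    A toy illustration (not the decompression sickness model itself) of the contrast the abstract draws: a maximum likelihood point estimate versus a Bayesian posterior distribution with a credible interval, here for a single risk parameter p given k "hits" in n trials, using a flat prior and a simple grid approximation:

    ```python
    import math

    def log_likelihood(p, k, n):
        """Binomial log-likelihood (up to a constant) for hit probability p."""
        return k * math.log(p) + (n - k) * math.log(1 - p)

    def posterior_grid(k, n, steps=1000):
        """Discretized posterior over p in (0, 1) with a flat prior."""
        ps = [(i + 0.5) / steps for i in range(steps)]
        w = [math.exp(log_likelihood(p, k, n)) for p in ps]
        total = sum(w)
        return ps, [wi / total for wi in w]

    def credible_interval(ps, post, mass=0.95):
        """Central credible interval from the discretized posterior."""
        tail = (1 - mass) / 2
        cum, lo, hi = 0.0, ps[0], ps[-1]
        for p, w in zip(ps, post):
            if cum < tail <= cum + w:
                lo = p
            if cum < 1 - tail <= cum + w:
                hi = p
            cum += w
        return lo, hi

    k, n = 3, 20                 # e.g., 3 incidents in 20 exposures (made-up data)
    mle = k / n                  # maximum likelihood: a single fixed value
    ps, post = posterior_grid(k, n)
    lo, hi = credible_interval(ps, post)
    print(mle, round(lo, 2), round(hi, 2))
    ```

    The MLE is one number; the Bayesian output is a distribution, from which the credible interval directly answers "what is the probability that p lies in this range", which is the property the abstract argues makes the Bayesian approach better suited to quantifying parameter uncertainty.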

  1. On the Hydrologic Adjustment of Climate-Model Projections: The Potential Pitfall of Potential Evapotranspiration

    USGS Publications Warehouse

    Milly, Paul C.D.; Dunne, Krista A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.

  2. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  3. DINA Model and Parameter Estimation: A Didactic

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    Cognitive and skills diagnosis models are psychometric models that have immense potential to provide rich information relevant for instruction and learning. However, wider applications of these models have been hampered by their novelty and the lack of commercially available software that can be used to analyze data from this psychometric…

  4. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state of the art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
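
The time-series step the abstract describes can be sketched as a one-step linear-regression forecast of a parameter's past performance. The history values below are synthetic and illustrative:

```python
# Minimal sketch of predictive parameter control: fit a least-squares line
# to a parameter value's recorded performance and extrapolate one step.
# The performance history below is synthetic, for illustration only.

def linear_forecast(history):
    """Fit y = slope*t + intercept by least squares, predict the next step."""
    n = len(history)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(history) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, history))
    var = sum((t - t_mean) ** 2 for t in ts)
    slope = cov / var
    intercept = y_mean - slope * t_mean
    return slope * n + intercept  # predicted performance at the next iteration

next_perf = linear_forecast([0.50, 0.55, 0.61, 0.64, 0.70])
```

The forecast would then feed the probability of selecting that parameter value in the next iteration, with the caveat from the abstract that the history must satisfy the regression's assumptions.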

  5. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often adopts a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS

  6. Distributed parameter modeling of repeated truss structures

    NASA Technical Reports Server (NTRS)

    Wang, Han-Ching

    1994-01-01

A new approach to find homogeneous models for beam-like repeated flexible structures is proposed which conceptually involves two steps. The first step involves the approximation of the 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum. The displacement field is approximated by Taylor series expansion. Then, the cross sectional mass and stiffness matrices are obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step involves the homogenization from a 1-D periodic beam model to a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is using compliance instead of stiffness in homogenization. An obvious justification is that the stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
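
The compliance argument can be illustrated with the springs-in-series analogy: along the beam's axis, segment compliances (1/stiffness) add, so the equivalent homogeneous stiffness is the harmonic mean of the segment stiffnesses. The segment values below are illustrative:

```python
# Sketch of compliance-based homogenization: along the generic axis,
# segment compliances add like springs in series, so the equivalent
# homogeneous stiffness is the harmonic mean. Values are illustrative.

def equivalent_stiffness(segment_stiffnesses):
    compliance = sum(1.0 / k for k in segment_stiffnesses)  # additive along the axis
    return len(segment_stiffnesses) / compliance

k_eq = equivalent_stiffness([100.0, 400.0])  # one soft bay and one stiff bay
```

Averaging the stiffnesses directly would give 250 for this pair, overestimating the softness-dominated series behavior (the harmonic mean gives 160), which motivates the paper's use of compliance rather than stiffness.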

  7. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    ERIC Educational Resources Information Center

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  8. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    ERIC Educational Resources Information Center

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  9. A Model of Divorce Adjustment for Use in Family Service Agencies.

    ERIC Educational Resources Information Center

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  11. Parameter redundancy in discrete state‐space and integrated models

    PubMed Central

    McCrea, Rachel S.

    2016-01-01

    Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826

  12. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
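
The redundancy idea can be sketched numerically: if the Jacobian of the model's observable quantities with respect to its parameters is rank-deficient, not all parameters are estimable. The toy model below, in which survival phi and detection p enter only through their product, is illustrative and is not the paper's exhaustive-summary derivation:

```python
# Numerical sketch of parameter-redundancy detection for a toy model in
# which phi and p enter the observables only through the product phi*p.
# A (near-)zero Jacobian determinant signals rank deficiency, i.e. the
# two parameters cannot be estimated separately.

def outputs(phi, p):
    return (phi * p, (phi * p) ** 2)  # both observables depend on phi*p only

def jacobian_det(phi, p, eps=1e-6):
    f0 = outputs(phi, p)
    d_phi = [(a - b) / eps for a, b in zip(outputs(phi + eps, p), f0)]
    d_p = [(a - b) / eps for a, b in zip(outputs(phi, p + eps), f0)]
    return d_phi[0] * d_p[1] - d_phi[1] * d_p[0]

det = jacobian_det(0.8, 0.5)  # ~0: rank-deficient, parameters redundant
```

Combining data sets, as in the integrated models the paper discusses, amounts to adding observables that break this degeneracy so the Jacobian regains full rank.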

  13. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    PubMed

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
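
Under the assumptions the abstract outlines (the negative control outcome shares the confounding structure of the outcome of interest but is causally unrelated to the exposure), the indirect adjustment can be sketched as dividing the observed hazard ratio by a bias factor estimated from the negative control. The function and numbers below are hypothetical, not the paper's estimates:

```python
# Hypothetical sketch of indirect adjustment via a negative control
# outcome: the exposure's association with the negative control is taken
# as an estimate of the bias due to unmeasured confounding (e.g. smoking),
# and the observed hazard ratio is divided by it. Numbers are illustrative.

def indirectly_adjusted_hr(observed_hr, negative_control_hr):
    """Divide the observed hazard ratio by a bias factor taken from the
    exposure's association with a negative control outcome."""
    return observed_hr / negative_control_hr

adjusted = indirectly_adjusted_hr(observed_hr=1.44, negative_control_hr=1.20)
```

If the negative control shows no association (bias factor 1), the observed estimate passes through unchanged, matching the miners' result in the abstract.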

  14. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    PubMed

    Attar-Schwartz, Shalhevet

    2015-09-01

Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the association between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes.

  15. A model of parental representations, second individuation, and psychological adjustment in late adolescence.

    PubMed

    Boles, S A

    1999-04-01

This study examined the role that mental representations and the second individuation process play in adjustment during late adolescence. Participants between the ages of 18 and 22 were used to test a theoretical model exploring the various relationships among the following latent variables: Parental Representations, Psychological Differentiation, Psychological Dependence, Positive Adjustment, and Maladjustment. The results indicated that the quality of parental representations facilitates the second individuation process, which in turn facilitates psychological adjustment in late adolescence. Furthermore, the results indicated that the second individuation process mediates the influence that the quality of parental representations has on psychological adjustment in late adolescence. These findings are discussed in light of previous research in this area, and clinical implications and suggestions for future research are offered.

  16. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  17. Equating Parameter Estimates from the Generalized Graded Unfolding Model.

    ERIC Educational Resources Information Center

    Roberts, James S.

Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…

  19. Modeled rapid adjustments in diurnal temperature range response to CO2 and solar forcings

    NASA Astrophysics Data System (ADS)

    Jackson, Lawrence S.; Forster, Piers M.

    2013-03-01

    We used the National Center for Atmospheric Research single column climate model to determine if rapid adjustments to surface heat fluxes contribute to a change in skin surface or surface air diurnal temperature range (DTR) under 2 × CO2 and -2% solar forcings. An ensemble of model runs was employed with locations selected to represent a range of different climatic conditions and with forcing implemented hourly throughout the diurnal cycle. The change in skin surface DTR and surface energy fluxes during the 3 days after forcing were used to quantify the rapid adjustment response and temperature related feedback. Averaged over all locations, skin surface DTR reduced by 0.01°C after CO2 forcing and included a rapid adjustment to skin surface DTR of -0.12°C. Skin surface DTR reduced by 0.17°C after solar forcing and included a rapid adjustment of -0.01°C. The rapid adjustments in skin surface DTR were associated with rapid adjustments in surface sensible and latent heat fluxes necessary to balance the energy budget immediately after forcing. We find that the sensitivity of skin surface DTR to mean temperature related feedback is the same for CO2 and solar forcings when skin surface DTR rapid adjustments are allowed for. Rapid adjustments played a key role in the geographic variation of the skin surface DTR response to forcing. Our results suggest that diurnal variations in trends of downwelling longwave radiation and rapid reductions in DTR associated with CO2 forcing potentially contributed to the observed global trend in surface air DTR.

  20. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper describes a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.

  1. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520
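
The signal model underlying T2 mapping is a mono-exponential decay, S(TE) = A·exp(-TE/T2). As a sketch of the parameter-estimation step only (the paper's method additionally enforces sparsity and operates on undersampled k-space data), a log-linear least-squares fit on noiseless synthetic data recovers T2:

```python
# Sketch of the T2-mapping signal model: S(TE) = A * exp(-TE / T2),
# fitted by a log-linear least-squares fit to noiseless synthetic data.
import math

def fit_t2(echo_times, signals):
    logs = [math.log(s) for s in signals]
    n = len(echo_times)
    t_mean = sum(echo_times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(echo_times, logs))
             / sum((t - t_mean) ** 2 for t in echo_times))
    return -1.0 / slope  # log S is linear in TE with slope -1/T2

tes = [10.0, 30.0, 50.0, 70.0]               # echo times in ms
sig = [math.exp(-te / 80.0) for te in tes]   # synthetic decay with T2 = 80 ms
t2_est = fit_t2(tes, sig)
```

With noisy, undersampled data this naive fit degrades quickly, which is the gap the model-based sparsity-constrained formulation addresses.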

  2. Distributed parameter modeling for the control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.

    1990-01-01

    The use of FEMs of spacecraft structural dynamics is a common practice, but it has a number of shortcomings. Distributed-parameter models offer an alternative, but present both advantages and difficulties. First, the model order does not have to be reduced prior to the inclusion of control system dynamics. This advantage eliminates the risk involved with model 'order reduction'. Second, distributed parameter models inherently involve fewer parameters, thereby enabling more accurate parameter estimation using experimental data. Third, it is possible to include the damping in the basic model, thereby increasing the accuracy of the structural damping. The difficulty in generating distributed parameter models of complex spacecraft configurations has been greatly alleviated by the use of PDEMOD, BUNVIS-RG, or DISTEL. PDEMOD is being developed for simultaneously modeling structural dynamics and control system dynamics.

  3. Bayesian Estimation in the One-Parameter Latent Trait Model.

    DTIC Science & Technology

    1980-03-01

parameter, difficulty, is commonly known as the Rasch model or the one-parameter logistic model. For a detailed description of this model and its prop…
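
The Rasch (one-parameter logistic) model named in the abstract gives the probability of a correct response as a logistic function of the difference between ability theta and item difficulty b:

```python
# The Rasch / one-parameter logistic model: response probability depends
# only on the difference between ability theta and item difficulty b.
import math

def rasch_probability(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p = rasch_probability(theta=0.0, b=0.0)  # 0.5 when ability equals difficulty
```

Bayesian estimation, as studied in the report, places priors on theta and b and works with the posterior implied by this likelihood.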

  4. Isolating parameter sensitivity in reach scale transient storage modeling

    NASA Astrophysics Data System (ADS)

    Schmadel, Noah M.; Neilson, Bethany T.; Heavilin, Justin E.; Wörman, Anders

    2016-03-01

    Parameter sensitivity analyses, although necessary to assess identifiability, may not lead to an increased understanding or accurate representation of transient storage processes when associated parameter sensitivities are muted. Reducing the number of uncertain calibration parameters through field-based measurements may allow for more realistic representations and improved predictive capabilities of reach scale stream solute transport. Using a two-zone transient storage model, we examined the spatial detail necessary to set parameters describing hydraulic characteristics and isolate the sensitivity of the parameters associated with transient storage processes. We represented uncertain parameter distributions as triangular fuzzy numbers and used closed form statistical moment solutions to express parameter sensitivity thus avoiding copious model simulations. These solutions also allowed for the direct incorporation of different levels of spatial information regarding hydraulic characteristics. To establish a baseline for comparison, we performed a sensitivity analysis considering all model parameters as uncertain. Next, we set hydraulic parameters as the reach averages, leaving the transient storage parameters as uncertain, and repeated the analysis. Lastly, we incorporated high resolution hydraulic information assessed from aerial imagery to examine whether more spatial detail was necessary to isolate the sensitivity of transient storage parameters. We found that a reach-average hydraulic representation, as opposed to using detailed spatial information, was sufficient to highlight transient storage parameter sensitivity and provide more information regarding the potential identifiability of these parameters.
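
The closed-form moment idea can be illustrated with the first two moments of a triangular distribution, the kind of expression that lets a parameter represented as a triangular fuzzy number (support [a, b], mode m) be propagated without repeated model simulations. The parameter values below are illustrative:

```python
# Closed-form first two moments of a triangular distribution with
# support [a, b] and mode m; values below are illustrative.

def triangular_moments(a, m, b):
    mean = (a + m + b) / 3.0
    var = (a**2 + m**2 + b**2 - a*m - a*b - m*b) / 18.0
    return mean, var

mean, var = triangular_moments(a=0.1, m=0.3, b=0.7)
```

Comparing such moment-based sensitivities across parameters is what allows the authors to rank transient storage parameters without copious Monte Carlo runs.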

  5. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
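
A minimal sketch of the geometric view of contact angle, assuming a droplet shaped as a spherical cap: the angle follows from the cap height h and base radius a via tan(theta/2) = h/a. This is a generic geometric relation, not the paper's adjustment scheme itself:

```python
# Geometric contact angle of a spherical-cap droplet from its height and
# base radius, via tan(theta/2) = h / a. Numbers are illustrative.
import math

def contact_angle_deg(height, base_radius):
    return math.degrees(2.0 * math.atan2(height, base_radius))

theta = contact_angle_deg(height=1.0, base_radius=1.0)  # hemisphere: 90 degrees
```

Measuring the angle from simulated droplet geometry in this way, rather than from the near-wall density field, avoids the surface-tension dependence the abstract criticizes in the original adjustment method.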

  7. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  8. Testing a developmental cascade model of adolescent substance use trajectories and young adult adjustment

    PubMed Central

    LYNNE-LANDSMAN, SARAH D.; BRADSHAW, CATHERINE P.; IALONGO, NICHOLAS S.

    2013-01-01

    Developmental models highlight the impact of early risk factors on both the onset and growth of substance use, yet few studies have systematically examined the indirect effects of risk factors across several domains, and at multiple developmental time points, on trajectories of substance use and adult adjustment outcomes (e.g., educational attainment, mental health problems, criminal behavior). The current study used data from a community epidemiologically defined sample of 678 urban, primarily African American youth, followed from first grade through young adulthood (age 21) to test a developmental cascade model of substance use and young adult adjustment outcomes. Drawing upon transactional developmental theories and using growth mixture modeling procedures, we found evidence for a developmental progression from behavioral risk to adjustment problems in the peer context, culminating in a high-risk trajectory of alcohol, cigarette, and marijuana use during adolescence. Substance use trajectory membership was associated with adjustment in adulthood. These findings highlight the developmental significance of early individual and interpersonal risk factors on subsequent risk for substance use and, in turn, young adult adjustment outcomes. PMID:20883591

  9. Reviews and syntheses: parameter identification in marine planktonic ecosystem modelling

    NASA Astrophysics Data System (ADS)

    Schartau, Markus; Wallhead, Philip; Hemmings, John; Löptien, Ulrike; Kriest, Iris; Krishna, Shubham; Ward, Ben A.; Slawig, Thomas; Oschlies, Andreas

    2017-03-01

    To describe the underlying processes involved in oceanic plankton dynamics is crucial for the determination of energy and mass flux through an ecosystem and for the estimation of biogeochemical element cycling. Many planktonic ecosystem models were developed to resolve major processes so that flux estimates can be derived from numerical simulations. These results depend on the type and number of parameterizations incorporated as model equations. Furthermore, the values assigned to respective parameters specify a model's solution. Representative model results are those that can explain data; therefore, data assimilation methods are utilized to yield optimal estimates of parameter values while fitting model results to match data. Central difficulties are (1) planktonic ecosystem models are imperfect and (2) data are often too sparse to constrain all model parameters. In this review we explore how problems in parameter identification are approached in marine planktonic ecosystem modelling. We provide background information about model uncertainties and estimation methods, and how these are considered for assessing misfits between observations and model results. We explain differences in evaluating uncertainties in parameter estimation, thereby also discussing issues of parameter identifiability. Aspects of model complexity are addressed and we describe how results from cross-validation studies provide much insight in this respect. Moreover, approaches are discussed that consider time- and space-dependent parameter values. We further discuss the use of dynamical/statistical emulator approaches, and we elucidate issues of parameter identification in global biogeochemical models. Our review discloses many facets of parameter identification, as we found many commonalities between the objectives of different approaches, but scientific insight differed between studies. 
To learn more from results of planktonic ecosystem models we recommend finding a good balance in the level

  10. Automatic parameter adjustment of difference of Gaussian (DoG) filter to improve OT-MACH filter performance for target recognition applications

    NASA Astrophysics Data System (ADS)

    Alkandri, Ahmad; Gardezi, Akber; Bangalore, Nagachetan; Birch, Philip; Young, Rupert; Chatwin, Chris

    2011-11-01

A wavelet-modified frequency domain Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter has been trained using 3D CAD models and tested on real target images acquired from a Forward Looking Infra Red (FLIR) sensor. The OT-MACH filter can be used to detect and discriminate predefined targets from a cluttered background. The FLIR sensor extends the filter's ability by increasing the range of detection by exploiting the heat signature differences between the target and the background. A Difference of Gaussians (DoG) based wavelet filter has been used to improve the OT-MACH filter's discrimination ability and distortion tolerance. Choosing the right standard deviation values of the two Gaussians comprising the filter is critical. In this paper we present a new technique for automatic adjustment of the DoG filter parameters driven by the expected target size. Tests were carried out on images acquired by the FLIR sensor mounted on the Apache AH-64 helicopter, with results showing an overall improvement in the recognition of target objects present within the IR images.
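
Deriving both standard deviations from the expected target size can be sketched as below; the size-to-sigma link (/6) and the 1.6 sigma ratio are assumptions for illustration, not the authors' values:

```python
# Sketch of auto-setting DoG parameters from expected target size.
# The /6 link and the 1.6 ratio are illustrative assumptions.
import math

def dog_kernel(target_size_px, ratio=1.6):
    """1-D difference-of-Gaussians kernel with both standard deviations
    derived from the expected target size in pixels."""
    sigma1 = target_size_px / 6.0
    sigma2 = ratio * sigma1
    hw = int(3 * sigma2)  # half-width covering ~3 sigma of the wider Gaussian
    def g(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(x, sigma1) - g(x, sigma2) for x in range(-hw, hw + 1)]

kernel = dog_kernel(target_size_px=24)  # band-pass kernel, near-zero mean
```

Because the kernel's pass band tracks the target's spatial scale, re-deriving the sigmas per expected target size removes the manual tuning step the abstract identifies as critical.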

  11. Statistical mechanical approaches to models with many poorly known parameters

    NASA Astrophysics Data System (ADS)

    Brown, Kevin S.; Sethna, James P.

    2003-08-01

    Models of biochemical regulation in prokaryotes and eukaryotes, typically consisting of a set of first-order nonlinear ordinary differential equations, have become increasingly popular of late. These systems have large numbers of poorly known parameters, simplified dynamics, and uncertain connectivity: three key features of a class of problems we call sloppy models, which are shared by many other high-dimensional multiparameter nonlinear models. We use a statistical ensemble method to study the behavior of these models, in order to extract as much useful predictive information as possible from a sloppy model, given the available data used to constrain it. We discuss numerical challenges that emerge in using the ensemble method for a large system. We characterize features of sloppy model parameter fluctuations by various spectral decompositions and find indeed that five parameters can be used to fit an elephant. We also find that model entropy is as important to the problem of model choice as model energy is to parameter choice.
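
    The sloppiness described above, parameter combinations whose stiffness spans many decades, can be illustrated with a toy multiexponential model; the model and decay rates below are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy "sloppy" model: y(t) = sum_i exp(-theta_i * t) with nearby decay rates.
t = np.linspace(0.1, 5.0, 50)
theta = np.array([1.0, 1.2, 1.5, 2.0])

# Jacobian of the model output with respect to each rate parameter.
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

# Eigenvalues of J^T J (a proxy for the cost Hessian) spread over many
# decades: a few stiff directions are well constrained, the rest are sloppy.
eigvals = np.linalg.eigvalsh(J.T @ J)
decades = np.log10(eigvals.max() / eigvals.min())
```

    The wide eigenvalue spread is the spectral signature of a sloppy model: data constrain only a handful of stiff parameter combinations.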

  12. Dynamics in the Parameter Space of a Neuron Model

    NASA Astrophysics Data System (ADS)

    Rech, Paulo C.

    2012-06-01

    Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.

  13. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  14. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter errors. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the
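
    Characteristics (1) and (2) can be sketched in a few lines: concatenate states and parameters into a joint ensemble, shrink the parameter members toward their mean (kernel smoothing), and apply a standard ensemble Kalman analysis step. The sketch below assumes a single direct observation of the first state variable and is an illustration, not the authors' SEnKF.

```python
import numpy as np

rng = np.random.default_rng(0)

def senkf_step(states, params, obs, obs_err, h=0.9):
    """One joint state-parameter analysis step (illustrative sketch)."""
    # (2) kernel smoothing: shrink parameters toward the ensemble mean to
    # curb sudden jumps and slow the variance collapse behind divergence.
    params = h * params + (1.0 - h) * params.mean(axis=0)
    # (1) concatenate states and parameters into one joint state vector.
    X = np.hstack([states, params])
    Hx = X[:, 0]                                   # observe first state (assumed)
    y = obs + obs_err * rng.standard_normal(X.shape[0])  # perturbed observations
    C = np.cov(np.column_stack([X, Hx]).T)
    K = C[:-1, -1] / (C[-1, -1] + obs_err ** 2)    # Kalman gain (scalar obs)
    X = X + np.outer(y - Hx, K)                    # analysis update
    return X[:, :states.shape[1]], X[:, states.shape[1]:]

states = rng.normal(5.0, 1.0, size=(100, 2))       # ensemble of model states
params = rng.normal(0.5, 0.2, size=(100, 1))       # ensemble of parameters
new_states, new_params = senkf_step(states, params, obs=6.0, obs_err=0.3)
```

    Because parameters sit in the joint vector, their ensemble is nudged by the same gain that corrects the states, which is how recursive assimilation can track time variation of parameters.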

  15. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    SciTech Connect

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
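
    Of the four SA approaches listed, the standardized-regression-coefficient one is the simplest to sketch: sample the parameters, run the model, and regress the standardized output on the standardized inputs. The toy response below stands in for a simulated flux and is purely illustrative, not the Community Land Model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy response: strongly driven by p0, weakly by p1, not at all by p2.
n = 500
P = rng.uniform(0.0, 1.0, size=(n, 3))
y = 5.0 * P[:, 0] + 0.5 * P[:, 1] + 0.1 * rng.standard_normal(n)

# Standardize inputs and output, then regress: the absolute values of the
# standardized regression coefficients (SRCs) rank parameter importance.
Pz = (P - P.mean(axis=0)) / P.std(axis=0)
yz = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Pz, yz, rcond=None)
```

    As the abstract notes, such linear-model measures agree with variance-based approaches on the dominant parameters but can differ on the secondary ones.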

  17. Risk-adjusted outcome models for public mental health outpatient programs.

    PubMed Central

    Hendryx, M S; Dyck, D G; Srebnik, D

    1999-01-01

    OBJECTIVE: To develop and test risk-adjustment outcome models in publicly funded mental health outpatient settings. We developed prospective risk models that used demographic and diagnostic variables; client-reported functioning, satisfaction, and quality of life; and case manager clinical ratings to predict subsequent client functional status, health-related quality of life, and satisfaction with services. DATA SOURCES/STUDY SETTING: Data collected from 289 adult clients at five- and ten-month intervals, from six community mental health agencies in Washington state located primarily in suburban and rural areas. Data sources included client self-report, case manager ratings, and management information system data. STUDY DESIGN: Model specifications were tested using prospective linear regression analyses. Models were validated in a separate sample and comparative agency performance examined. PRINCIPAL FINDINGS: Presence of severe diagnoses, substance abuse, client age, and baseline functional status and quality of life were predictive of mental health outcomes. Unadjusted versus risk-adjusted scores resulted in differently ranked agency performance. CONCLUSIONS: Risk-adjusted functional status and patient satisfaction outcome models can be developed for public mental health outpatient programs. Research is needed to improve the predictive accuracy of the outcome models developed in this study, and to develop techniques for use in applied settings. The finding that risk adjustment changes comparative agency performance has important consequences for quality monitoring and improvement. Issues in public mental health risk adjustment are discussed, including static versus dynamic risk models, utilization versus outcome models, choice and timing of measures, and access and quality improvement incentives. PMID:10201857
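
    The mechanics of risk adjustment, regressing outcomes on baseline risk and then comparing providers on residuals, can be sketched with synthetic data. The covariates and effect sizes below are invented for illustration; the study's own models used demographic, diagnostic, and clinical-rating predictors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Agency B serves higher-risk clients, so its raw mean outcome looks worse;
# regressing out baseline risk reverses the comparison.
n = 400
agency = rng.integers(0, 2, n)                    # 0 = agency A, 1 = agency B
risk = rng.normal(0.0, 1.0, n) + 1.0 * agency     # B's caseload is riskier
true_effect = np.where(agency == 1, 0.3, 0.0)     # B actually does better
outcome = -1.0 * risk + true_effect + 0.2 * rng.standard_normal(n)

raw_gap = outcome[agency == 1].mean() - outcome[agency == 0].mean()

# Risk-adjust: predict outcome from risk alone, compare mean residuals.
X = np.column_stack([np.ones(n), risk])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
resid = outcome - X @ beta
adj_gap = resid[agency == 1].mean() - resid[agency == 0].mean()
```

    The sign flip between `raw_gap` and `adj_gap` is the finding reported above: unadjusted and risk-adjusted scores rank agencies differently.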

  18. A local adjustment strategy for the initialization of dynamic causal modelling to infer effective connectivity in brain epileptic structures.

    PubMed

    Xiang, Wentao; Karfoul, Ahmad; Shu, Huazhong; Le Bouquin Jeannès, Régine

    2017-03-07

    This paper addresses the question of effective connectivity in the human cerebral cortex in the context of epilepsy. Among model-based approaches to infer brain connectivity, spectral Dynamic Causal Modelling is a conventional technique, for which we propose an alternative way to estimate the cross spectral density. The proposed strategy tackles the underestimation of the free energy by the well-known variational Expectation-Maximization algorithm, which is highly sensitive to the initialization of the parameter vector, through a permanent local adjustment of the initialization process. The performance of the proposed strategy in terms of effective connectivity identification is assessed using simulated data generated by a neuronal mass model (simulating unidirectional and bidirectional flows) and real epileptic intracerebral electroencephalographic signals. Results show the efficiency of the proposed approach compared to conventional Dynamic Causal Modelling and to a variant employing a deterministic annealing scheme.

  19. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at... Using a modified version of Chebyshev's inequality, a similar result is obtained for the upper bound of the confidence interval width for any

  20. Modified rapid shallow breathing index adjusted with anthropometric parameters increases predictive power for extubation failure compared with the unmodified index in postcardiac surgery patients.

    PubMed

    Takaki, Shunsuke; Kadiman, Suhaini Bin; Tahir, Sharifah Suraya; Ariff, M Hassan; Kurahashi, Kiyoyasu; Goto, Takahisa

    2015-02-01

    The aim of this study was to determine the best predictors of successful extubation after cardiac surgery by modifying the rapid shallow breathing index (RSBI) based on patients' anthropometric parameters. Single-center prospective observational study. Two general intensive care units at a single research institute. Patients who had undergone uncomplicated cardiac surgery. None. The following parameters were investigated in conjunction with modification of the RSBI: actual body weight (ABW), predicted body weight, ideal body weight, body mass index (BMI), and body surface area. Using the first set of patient data, threshold values of the RSBI and the modified RSBI for extubation failure were determined (RSBI: 77 breaths/min (bpm)/L; RSBI adjusted with ABW: 5.0 bpm×kg/mL; RSBI adjusted with BMI: 2.0 bpm×BMI/mL). These threshold values for the RSBI and the RSBI adjusted with ABW or BMI were validated using the second set of patient data. Sensitivity values for the RSBI, RSBI modified with ABW, and RSBI modified with BMI were 91%, 100%, and 100%, respectively. The corresponding specificity values were 89%, 92%, and 93%, and the corresponding receiver operating characteristic values were 0.951, 0.977, and 0.980, respectively. The modified RSBI adjusted based on ABW or BMI has greater predictive power than the conventional RSBI.
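
    The adjusted indices can be computed in a few lines. The exact formula is inferred here from the units quoted in the abstract (bpm×kg/mL and bpm×BMI/mL), i.e., respiratory rate divided by tidal volume in mL, multiplied by ABW or BMI; that reading is an assumption, not a statement from the paper.

```python
def rsbi(rr_bpm, vt_ml):
    """Conventional rapid shallow breathing index, breaths/min per litre."""
    return rr_bpm / (vt_ml / 1000.0)

def rsbi_abw(rr_bpm, vt_ml, abw_kg):
    """RSBI adjusted with actual body weight, bpm*kg/mL (threshold 5.0)."""
    return rr_bpm / vt_ml * abw_kg

def rsbi_bmi(rr_bpm, vt_ml, bmi):
    """RSBI adjusted with body mass index, bpm*BMI/mL (threshold 2.0)."""
    return rr_bpm / vt_ml * bmi

# Example: RR 20 bpm, VT 400 mL, ABW 70 kg, BMI 25.
conventional = rsbi(20, 400)          # vs threshold 77 bpm/L
weight_adj = rsbi_abw(20, 400, 70)    # vs threshold 5.0 bpm*kg/mL
bmi_adj = rsbi_bmi(20, 400, 25)       # vs threshold 2.0 bpm*BMI/mL
```

    With these example numbers all three indices fall below their failure thresholds, consistent with a prediction of successful extubation.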

  1. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
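
    A greatly simplified sketch of the screening idea: iteratively freeze the parameter whose perturbation changes the model output least, until every remaining parameter is informative. The real method's sampling design differs; the tolerance and relative step below are assumed values.

```python
import numpy as np

def sequential_screening(model, x0, rel_step=0.1, tol=1e-3, max_rounds=20):
    """Greedy sketch: drop the least influential parameter each round."""
    informative = set(range(len(x0)))
    y0 = model(x0)
    for _ in range(max_rounds):
        if not informative:
            break
        effects = {}
        for i in informative:
            x = x0.copy()
            x[i] *= 1.0 + rel_step            # one-at-a-time perturbation
            effects[i] = abs(model(x) - y0)
        weakest = min(effects, key=effects.get)
        if effects[weakest] > tol * max(abs(y0), 1.0):
            break                              # all remaining parameters matter
        informative.remove(weakest)            # freeze noninformative parameter
    return sorted(informative)

# Toy model whose output ignores x[2] entirely.
model = lambda x: 3.0 * x[0] + x[1] ** 2
found = sequential_screening(model, np.array([1.0, 2.0, 5.0]))  # -> [0, 1]
```

    Freezing noninformative parameters before calibration is what yields the reported savings: subsequent optimization only searches the informative subspace.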

  2. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
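
    For an instantaneous release, the concentration profile is Gaussian in x, so ln C is linear in x² and a least-squares fit to noisy simulated observations recovers the diffusion parameter. The sketch below is one-dimensional with invented values; the paper's model is two-dimensional with shear.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D instantaneous release: C(x) = M / sqrt(4*pi*D*t) * exp(-x^2 / (4*D*t)).
D_true, M, t = 2.0, 100.0, 5.0
x = np.linspace(-6.0, 6.0, 25)
C = M / np.sqrt(4 * np.pi * D_true * t) * np.exp(-x ** 2 / (4 * D_true * t))

# Simulated remote-sensed data: multiplicative Gaussian sensor noise.
C_obs = C * (1.0 + 0.01 * rng.standard_normal(x.size))

# ln C is linear in x^2 with slope -1/(4*D*t), so ordinary least squares
# on (x^2, ln C) recovers the diffusion parameter.
slope, intercept = np.polyfit(x ** 2, np.log(C_obs), 1)
D_est = -1.0 / (4 * t * slope)
```

    Repeating the fit at different noise levels and sensor spacings is a cheap way to explore how estimate accuracy constrains the sensor design, as the abstract describes.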

  3. A Logical Difficulty of the Parameter Setting Model.

    ERIC Educational Resources Information Center

    Sasaki, Yoshinori

    1990-01-01

    Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)

  5. Determining extreme parameter correlation in ground water models.

    USGS Publications Warehouse

    Hill, M.C.; Osterby, O.

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters were more equally sensitive. When the statistical measures fail, parameter correlation can be identified only by the tedious process of executing regression using different sets of starting values, or, in some circumstances, through graphs of the objective function.
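
    Both diagnostics can be sketched from a sensitivity (Jacobian) matrix in which two parameter columns are almost exactly proportional, mimicking the extreme hydraulic-conductivity/recharge correlation described above. The matrix is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sensitivity matrix: column 1 is nearly proportional to column 0, the
# signature of two parameters that cannot be estimated uniquely.
s = rng.normal(size=20)
J = np.column_stack([s, 2.0 * s + 1e-6 * rng.normal(size=20),
                     rng.normal(size=20)])

# Parameter correlation coefficients from the linearized covariance matrix.
cov = np.linalg.inv(J.T @ J)
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)          # |corr[0, 1]| rounds to 1.00

# SVD view: a tiny smallest singular value flags the same non-uniqueness.
svals = np.linalg.svd(J, compute_uv=False)
```

    In practice the reliability of both indicators depends on the numerical precision of the sensitivities, which is the central point of the study.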

  6. Solar Parameters for Modeling the Interplanetary Background

    NASA Astrophysics Data System (ADS)

    Bzowski, Maciej; Sokół, Justyna M.; Tokumaru, Munetoshi; Fujiki, Kenichi; Quémerais, Eric; Lallement, Rosine; Ferron, Stéphane; Bochsler, Peter; McComas, David J.

    The goal of the working group on cross-calibration of past and present ultraviolet (UV) datasets of the International Space Science Institute (ISSI) in Bern, Switzerland was to establish a photometric cross-calibration of various UV and extreme ultraviolet (EUV) heliospheric observations. Realization of this goal required a credible and up-to-date model of the spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the latter part of the project: the solar factors responsible for shaping the distribution of neutral interstellar H in the heliosphere. In this paper we present the solar Lyman-α flux and the topics of solar Lyman-α resonant radiation pressure force acting on neutral H atoms in the heliosphere. We will also discuss solar EUV radiation and resulting photoionization of heliospheric hydrogen along with their evolution in time and the still hypothetical variation with heliolatitude. Furthermore, solar wind and its evolution with solar activity is presented, mostly in the context of charge exchange ionization of heliospheric neutral hydrogen, and dynamic pressure variations. Also electron-impact ionization of neutral heliospheric hydrogen and its variation with time, heliolatitude, and solar distance is discussed. After a review of the state of the art in all of those topics, we proceed to present an interim model of the solar wind and the other solar factors based on up-to-date in situ and remote sensing observations. This model was used by Izmodenov et al. (2013, this volume) to calculate the distribution of heliospheric hydrogen, which in turn was the basis for intercalibrating the heliospheric UV and EUV measurements discussed in Quémerais et al. (2013, this volume). Results of this joint effort will also be used to improve the model of the solar wind evolution, which will be an invaluable asset in interpretation of

  7. Testing a Social Ecological Model for Relations between Political Violence and Child Adjustment in Northern Ireland

    PubMed Central

    Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2013-01-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland completed measures of community discord, family relations, and children’s regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) were related to family members’ reports of current sectarian and non-sectarian antisocial behavior. Interparental conflict and parental monitoring and children’s emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children’s adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world. PMID:20423550

  8. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    PubMed

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) were related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  9. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Treesearch

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  10. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
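
    One way to detect such interdependencies is to examine correlations among parameter values fitted independently across a class of materials. The synthetic (A, B, n) sets below assume a linear A-B link and are loosely inspired by Johnson-Cook strength parameters; the data are invented, not the iron/steel data of the report.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical fitted strength parameters for a family of 30 steels.
A = rng.uniform(200.0, 800.0, 30)            # yield strength, MPa
B = 0.9 * A + rng.normal(0.0, 15.0, 30)      # hardening modulus tied to A
n = rng.uniform(0.1, 0.4, 30)                # independent hardening exponent
params = np.column_stack([A, B, n])

corr = np.corrcoef(params, rowvar=False)
# |corr[0, 1]| near 1 suggests B can be expressed as a function of A,
# eliminating one numerical parameter for this material class.
```

    A strong off-diagonal entry in `corr` is the cue to replace one parameter with a fitted function of another, which is the parameter-reduction strategy the abstract describes.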

  11. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  13. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  14. Improving Model Performance by Dynamically Varying the Sensitive Parameters.

    NASA Astrophysics Data System (ADS)

    Girija, L.; Sudheer, K.

    2016-12-01

    Distributed hydrologic models are extensively used for watershed modeling these days, as they can account for the heterogeneity of the catchment to a great extent. The large number of parameters that need to be calibrated, however, increases the complexity of the modelling. The various parameters may represent physical processes that have seasonal variability. Nevertheless, most hydrologic models assume temporal invariance of the model parameters and their respective optimal values. This assumption, though it helps in reducing the computational burden, may compromise the ability of the model to extract information from observed data having seasonal changes. In addition, the model may not simulate acceptable watershed responses during wet and dry seasons of the year if the same parameter values are employed throughout. Consequently, this study considers parameter values that vary across the seasons of the year and evaluates the resulting performance. A demonstrative case study is conducted using the SWAT model with data pertaining to the Mississinewa watershed, Indiana, USA. The seasonally sensitive parameters of the model are identified and are varied during the four seasons of the year, considering the antecedent moisture conditions. The model is expected to simulate high and low flows more accurately than the conventional method of using a single set of parameter values for the entire simulation.
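
    The idea of season-dependent parameter values reduces, at its simplest, to a lookup applied at each simulation step. The coefficients and season mapping below are illustrative placeholders, not SWAT parameters.

```python
# Season-dependent runoff coefficient instead of a single calibrated value.
SEASON_COEF = {"winter": 0.55, "spring": 0.45, "summer": 0.30, "autumn": 0.40}

def season_of(month):
    """Map a calendar month (1-12) to a meteorological season."""
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(month, "autumn")

def runoff(rain_mm, month):
    """Event runoff using the parameter value of the current season."""
    return SEASON_COEF[season_of(month)] * rain_mm

wet = runoff(50.0, month=1)    # winter event, wetter antecedent conditions
dry = runoff(50.0, month=7)    # summer event, drier antecedent conditions
```

    The same 50 mm event yields different runoff in winter and summer, which a single time-invariant parameter value cannot reproduce.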

  15. Validating Mechanistic Sorption Model Parameters and Processes for Reactive Transport in Alluvium

    SciTech Connect

    Zavarin, M; Roberts, S K; Rose, T P; Phinney, D L

    2002-05-02

    The laboratory batch and flow-through experiments presented in this report provide a basis for validating the mechanistic surface complexation and ion exchange model we use in our hydrologic source term (HST) simulations. Batch sorption experiments were used to examine the effect of solution composition on sorption. Flow-through experiments provided for an analysis of the transport behavior of sorbing elements and tracers which includes dispersion and fluid accessibility effects. Analysis of downstream flow-through column fluids allowed for evaluation of weakly-sorbing element transport. Secondary Ion Mass Spectrometry (SIMS) analysis of the core after completion of the flow-through experiments permitted the evaluation of transport of strongly sorbing elements. A comparison between these data and model predictions provides additional constraints to our model and improves our confidence in near-field HST model parameters. In general, cesium, strontium, samarium, europium, neptunium, and uranium behavior could be accurately predicted using our mechanistic approach but only after some adjustment was made to the model parameters. The required adjustments included a reduction in strontium affinity for smectite, an increase in cesium affinity for smectite and illite, a reduction in iron oxide and calcite reactive surface area, and a change in clinoptilolite reaction constants to reflect a more recently published set of data. In general, these adjustments are justifiable because they fall within a range consistent with our understanding of the parameter uncertainties. These modeling results suggest that the uncertainty in the sorption model parameters must be accounted for to validate the mechanistic approach. The uncertainties in predicting the sorptive behavior of U-1a and UE-5n alluvium also suggest that these uncertainties must be propagated to nearfield HST and large-scale corrective action unit (CAU) models.

  16. The research of hyperspectral image EBCOT lossless compression coding technology based on inter-spectrum adjustable parameter matrix reversible transform and intra-frame IDWT

    NASA Astrophysics Data System (ADS)

    Xie, Cheng-jun; Wei, Ying; Bi, Xin-wen; Li, Hui-zhu

    2009-10-01

    This paper presents a new reversible inter-spectrum transform with an adjustable parameter matrix, which achieves better redundancy elimination by tuning the magnitude parameter λ and the shift parameter δ of the transform matrix. Intra-frame redundancy is eliminated by an integer discrete wavelet transform (IDWT). Both transforms are computed entirely with additions and shifts, and this fast operation makes hardware implementation easier. After the inter-spectrum and intra-frame transforms, the hyper-spectral image is coded with an improved EBCOT algorithm. Using the hyper-spectral image Canal, acquired by AVIRIS of the American JPL laboratory, as the test image, the experimental results show that for lossless image compression the proposed method clearly outperforms MST, NIMST, the results of a research team of the Chinese Academy of Sciences, DPCMARJ, WinZip and JPEG-LS. With λ=7 and δ=3, the compression ratio of the proposed algorithm increases on average by 11%, 15%, 18%, 31%, 36%, 38% and 43%, respectively, compared to the above algorithms. It follows that the algorithm presented in this paper is a very good hyper-spectral image lossless compression coding algorithm.
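The lifting idea behind such addition-and-shift reversible transforms can be sketched as follows. This is a generic two-band example, not the paper's actual parameter matrix, and the band data are synthetic:

```python
import numpy as np

def forward(a, b, lam=7, delta=3):
    """Predict band b from band a using only a constant multiply and a shift.

    Reversible for integer inputs because inverse() recomputes the identical
    integer prediction and adds it back (a lifting step)."""
    pred = (lam * a) >> delta        # shift stands in for division by 2**delta
    return a, b - pred               # reference band, integer residual

def inverse(a, d, lam=7, delta=3):
    pred = (lam * a) >> delta
    return a, d + pred

rng = np.random.default_rng(0)
band1 = rng.integers(0, 4096, size=16)            # 12-bit samples
band2 = band1 + rng.integers(-50, 50, size=16)    # correlated neighbour band

a, d = forward(band1, band2)
_, recovered = inverse(a, d)
assert np.array_equal(recovered, band2)           # lossless by construction
```

When neighbouring bands are strongly correlated, the residual d has far lower entropy than the original band, which is what the subsequent EBCOT coder exploits.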

  17. Transferability of calibrated microsimulation model parameters for safety assessment using simulated conflicts.

    PubMed

    Essa, Mohamed; Sayed, Tarek

    2015-11-01

    Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. The results have also emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites, i.e., to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between field-measured and simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection, and the results were compared to those from parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as

  18. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  19. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    ERIC Educational Resources Information Center

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  1. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  2. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    ERIC Educational Resources Information Center

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  4. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behavior. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe the battery's dynamic characteristics. Secondly, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
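A minimal sketch of the EKF portion of such a scheme, not the authors' implementation: the parameter values are illustrative, the OCV curve is linearised, and the proportional integral-based error adjustment is omitted.

```python
import numpy as np

# Illustrative first-order RC (Thevenin) cell; every numeric value is made up.
dt, Q = 1.0, 3600.0                      # step (s), capacity (coulombs: 1 Ah)
R0, R1, C1 = 0.05, 0.02, 1000.0          # ohmic and RC-branch parameters
a_ocv, b_ocv = 3.2, 0.9                  # linearised OCV(SOC) = a + b * SOC

alpha = np.exp(-dt / (R1 * C1))

def f(x, i):                             # state transition, x = [SOC, V_rc]
    return np.array([x[0] - i * dt / Q, alpha * x[1] + R1 * (1 - alpha) * i])

def h(x, i):                             # terminal-voltage measurement model
    return a_ocv + b_ocv * x[0] - x[1] - R0 * i

F = np.array([[1.0, 0.0], [0.0, alpha]])         # state Jacobian
H = np.array([[b_ocv, -1.0]])                    # measurement Jacobian

def ekf_soc(currents, voltages, x0, P0, q=1e-7, r=1e-3):
    x, P = np.array(x0, float), np.array(P0, float)
    for i, v in zip(currents, voltages):
        x, P = f(x, i), F @ P @ F.T + q * np.eye(2)     # predict
        S = H @ P @ H.T + r
        K = P @ H.T / S                                 # Kalman gain
        x = x + (K * (v - h(x, i))).ravel()             # correct
        P = (np.eye(2) - K @ H) @ P
    return x

# Synthetic 1 A discharge; the filter starts from a badly wrong SOC guess.
rng = np.random.default_rng(1)
truth = np.array([0.9, 0.0])
currents, voltages = [], []
for _ in range(600):
    currents.append(1.0)
    voltages.append(h(truth, 1.0) + rng.normal(0.0, 0.002))
    truth = f(truth, 1.0)

x_est = ekf_soc(currents, voltages, x0=[0.5, 0.0], P0=np.eye(2) * 0.1)
assert abs(x_est[0] - truth[0]) < 0.05   # SOC recovered despite the bad start
```

The voltage residual pulls the SOC estimate toward truth through the OCV slope, while coulomb counting in the prediction step keeps it there between corrections.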

  5. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    SciTech Connect

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming; Carnes, Brian; Chen, Ken Shuang

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  6. State and Parameter Estimation for a Coupled Ocean--Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.

  7. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  8. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables

    PubMed Central

    Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.

    2016-01-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h−1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.01) were found between 10 km running time and adjusted and unadjusted RE and PTV, providing models with effect size > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation. PMID:28149382
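As an illustration of the modelling approach, the following sketch fits a multiple linear regression on allometrically scaled predictors. The numbers are synthetic, not the study's data, and dividing by mass raised to the abstract's exponents is only one plausible reading of the allometric adjustment:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 18                                   # same sample size as the study
mass = rng.uniform(55.0, 85.0, n)        # body mass, kg (synthetic)
ptv = rng.uniform(16.0, 21.0, n)         # peak treadmill velocity, km/h (synthetic)
re = rng.uniform(180.0, 220.0, n)        # running economy, ml/kg/km (synthetic)

# Allometrically adjusted predictors using the abstract's exponents.
x1 = ptv / mass**0.72
x2 = re / mass**0.60

# Synthetic "truth": race time depends linearly on the adjusted terms
# (arbitrary illustrative coefficients, plus a little noise).
time10k = 120.0 - 60.0 * x1 + 1.0 * x2 + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, time10k, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((time10k - pred) ** 2) / np.sum((time10k - time10k.mean()) ** 2)
assert r2 > 0.9                          # the adjusted model explains most variance
```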

  9. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but also the CC model with some other a priorly specified frequency dispersion (e.g. Warburg model) has been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, pose the question to which degree the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
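A toy version of such a decomposition can be sketched as follows (assuming SciPy is available; Debye kernels are fitted by nonnegative least squares on a relaxation-time grid, and the integral parameters are computed from the resulting distribution):

```python
import numpy as np
from scipy.optimize import nnls

freqs = np.logspace(-2, 4, 40)                  # Hz
w = 2.0 * np.pi * freqs

def cc_polarization(w, m, tau, c):
    """Polarisation term of the Cole-Cole model: m * (1 - 1/(1 + (i w tau)^c))."""
    return m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c))

# Synthetic data from an ideal response with c = 1 (a pure Debye peak), so the
# Debye decomposition should recover the input parameters almost exactly.
m_true, tau_true = 0.1, 1e-2
data = cc_polarization(w, m_true, tau_true, 1.0)

taus = np.logspace(-5, 2, 60)                   # relaxation-time grid
A = np.array([cc_polarization(w, 1.0, t, 1.0) for t in taus]).T  # Debye kernels
A_ri = np.vstack([A.real, A.imag])              # stack real and imaginary parts
b_ri = np.concatenate([data.real, data.imag])

m_k, _ = nnls(A_ri, b_ri)                       # nonnegative partial chargeabilities
m_total = m_k.sum()                             # integral (total) chargeability
tau_mean = np.exp((m_k * np.log(taus)).sum() / m_total)  # log-mean relaxation time
assert abs(m_total - m_true) < 0.01
assert 1e-3 < tau_mean < 1e-1                   # within a decade of tau_true
```

Repeating this with c < 1 and with the peak pushed toward the frequency-range limits reproduces the biases the paper quantifies.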

  10. Identification of hydrological model parameter variation using ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao

    2016-12-01

    Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
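The augmented-state idea — treating a parameter as a slowly varying state and updating an ensemble of its values against runoff observations — can be sketched with a deliberately trivial one-parameter "model" (synthetic data, not the TWBM):

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 200, 100                          # time steps, ensemble size
rain = rng.uniform(0.5, 2.0, T)
p_true = np.linspace(0.3, 0.6, T)        # parameter with a trend (cf. SC)
q_obs = p_true * rain + rng.normal(0.0, 0.02, T)   # observed "runoff"

ens = rng.normal(0.45, 0.1, N)           # ensemble of parameter values
est = []
for k in range(T):
    ens = ens + rng.normal(0.0, 0.005, N)              # artificial evolution noise
    q_ens = ens * rain[k] + rng.normal(0.0, 0.02, N)   # perturbed predictions
    gain = np.cov(ens, q_ens)[0, 1] / np.var(q_ens)    # Kalman gain (scalar case)
    ens = ens + gain * (q_obs[k] - q_ens)              # EnKF analysis step
    est.append(ens.mean())

est = np.array(est)
assert abs(est[-1] - p_true[-1]) < 0.05  # the trend in p has been tracked
```

The small artificial evolution noise is what keeps the ensemble from collapsing and lets the filter follow a drifting parameter.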

  11. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is a representation of the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model is used to calculate the odds of each category; when the categories of the dependent variable are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine the model's values for a population based on a sample. The purpose of this research is to estimate the parameters of a GWOLR model using the R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The research results give a local GWOLR model for each village and the probabilities of the dengue fever patient-count categories.

  12. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
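The sloppiness diagnosis itself is easy to reproduce on a toy model: compute the eigenvalues of the Gauss-Newton Hessian (J^T J) of a multi-exponential fit with respect to log-parameters and look at their spread. This is an illustration in the spirit of the paper, not its code:

```python
import numpy as np

# Toy "sloppy" model: y(t) = sum_i exp(-k_i t) with closely spaced rates.
# Sensitivities are taken w.r.t. log k_i, the usual convention.
t = np.linspace(0.0, 5.0, 50)
k = np.array([1.0, 1.3, 1.7, 2.2, 2.9, 3.8])

# d y / d log k_i = -k_i * t * exp(-k_i * t): one Jacobian column per parameter
J = np.stack([-ki * t * np.exp(-ki * t) for ki in k], axis=1)
H = J.T @ J                                  # Gauss-Newton Hessian (FIM-like)
eig = np.sort(np.linalg.eigvalsh(H))[::-1]

decades = np.log10(eig[0] / eig[-1])
assert decades > 3     # eigenvalues spread over many decades: a sloppy spectrum
```

The stiff directions (large eigenvalues) are the few parameter combinations the data actually constrain; everything else is sloppy.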

  13. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  14. Improving the global applicability of the RUSLE model - adjustment of the topographical and rainfall erosivity factors

    NASA Astrophysics Data System (ADS)

    Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.

    2015-09-01

    Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale. This limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale on which the model is parameterized. Our study aims at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale we find that the adjusted version shows a higher global mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the

  15. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    PubMed Central

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  16. Using bias-adjustment to improve the use of high-resolution climate model output

    NASA Astrophysics Data System (ADS)

    Corney, S.; Holz, G.; Bennett, J.; Grose, M.; White, C. J.; Gaynor, S.; Bindoff, N. L.

    2010-09-01

    , climate variables were bias-adjusted against Australian Water Availability Project data over the period 1961-2007. Daily rainfall, minimum and maximum temperature, evaporation and solar radiation were adjusted using differences in one-percentile bins for six models and four seasons for each 0.1° grid cell. The adjustments (1961-2007) were then applied to the climate model variables over the period 1961-2100. This technique assumes the 1961-2007 adjustments are maintained into the future. The technique aims to preserve the statistical characteristics of the original model output (the pdf) without changing the trends and variability (the climate change signature) of the data. Bias-adjusted model output can then be input directly into conceptual models. We present the bias-adjustment method and example results where bias-adjusted climate model output has been used as input into biophysical and hydrological models that have been calibrated against an observational record.
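A minimal empirical quantile-mapping sketch in the spirit of this method, using synthetic Gaussian "observations" and "model" series rather than AWAP data; the per-percentile adjustment is applied via interpolation:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, nbins=100):
    """Empirical quantile mapping in percentile bins: each future value is
    shifted by the (obs - model) offset at its quantile in the historical
    model distribution, carrying the model's trend into observation units."""
    qs = np.linspace(0.0, 1.0, nbins + 1)
    mq = np.quantile(model_hist, qs)              # model historical quantiles
    oq = np.quantile(obs_hist, qs)                # observed quantiles
    ranks = np.interp(model_future, mq, qs)       # quantile of each future value
    return model_future + np.interp(ranks, qs, oq - mq)

rng = np.random.default_rng(3)
obs = rng.normal(15.0, 3.0, 5000)        # "observed" daily maximum temperature
model = rng.normal(17.0, 4.0, 5000)      # biased model: too warm, too variable
future = rng.normal(19.0, 4.0, 5000)     # model future: a +2 degree trend

corrected = quantile_map(model, obs, future)
# The +2 degree trend is +0.5 model standard deviations, which maps to
# +1.5 degrees in observation units: corrected mean ~ 15 + 1.5 = 16.5,
# with the observed spread of ~3 degrees restored.
assert abs(corrected.mean() - 16.5) < 0.3
assert abs(corrected.std() - 3.0) < 0.3
```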

  17. The definition of hydrologic model parameters using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Salomonson, V. V.

    1978-01-01

    The reported investigation is concerned with the use of Landsat remote sensing to define input parameters for an array of hydrologic models which are used to synthesize streamflow and water quality parameters in the planning or management process. The ground truth sampling and the problems involved in translating the remotely sensed information into hydrologic model parameters are discussed. Questions related to the modification of existing models for compatibility with remote sensing capabilities are also examined. It is shown that the input parameters of many models are presently overdefined in terms of the sensitivity and accuracy of the model. Once this overdefinition is recognized, many of the models currently considered incompatible with remote sensing capabilities can be modified for use with sensors of rather low resolution.

  18. State and parameter estimation for canonic models of neural oscillators.

    PubMed

    Tyukin, Ivan; Steur, Erik; Nijmeijer, Henk; Fairhurst, David; Song, Inseon; Semyanov, Alexey; Van Leeuwen, Cees

    2010-06-01

    We consider the problem of how to recover the state and parameter values of typical model neurons, such as Hindmarsh-Rose, FitzHugh-Nagumo, Morris-Lecar, from in-vitro measurements of membrane potentials. In control theory, in terms of observer design, model neurons qualify as locally observable. However, unlike most models traditionally addressed in control theory, no parameter-independent diffeomorphism exists, such that the original model equations can be transformed into adaptive canonic observer form. For a large class of model neurons, however, state and parameter reconstruction is possible nevertheless. We propose a method which, subject to mild conditions on the richness of the measured signal, allows model parameters and state variables to be reconstructed up to an equivalence class.

  19. Evaluation of the storage function model parameter characteristics

    NASA Astrophysics Data System (ADS)

    Sugiyama, Hironobu; Kadoya, Mutsumi; Nagai, Akihiro; Lansey, Kevin

    1997-04-01

    The storage function hydrograph model is one of the most commonly used models for flood runoff analysis in Japan. This paper studies the generality of the approach and its application to Japanese basins. Through a comparison of the basic equations for the models, the storage function model parameters, K, P, and T1, are shown to be related to the terms, k and p, in the kinematic wave model. This analysis showed that P and p are identical and K and T1 can be related to k, the basin area and its land use. To apply the storage function model throughout Japan, regional parameter relationships for K and T1 were developed for different land-use conditions using data from 22 watersheds and 91 flood events. These relationships combine the kinematic wave parameters with general topographic information using Hack's Law. The sensitivity of the parameters and their physical significance are also described.

  20. Extraction of exposure modeling parameters of thick resist

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei

    2004-12-01

    Experimental and theoretical analysis indicates that many nonlinear factors in the exposure process of thick resist can markedly affect the photoactive compound (PAC) concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and the exposure parameters should not be treated as constants, because they depend on resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and an experimental setup for measuring the exposure parameters of thick resist is developed. We measure the intensity transmittance curves of the thick resist AZ4562 under different processing conditions and extract the corresponding exposure parameters from the experimental results and calculations based on the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, the exposure process of thick resist can be simulated effectively.

  1. Parameter-adjusted stochastic resonance system for the aperiodic echo chirp signal in optimal FrFT domain

    NASA Astrophysics Data System (ADS)

    Lin, Li-feng; Yu, Lei; Wang, Huiqi; Zhong, Suchuan

    2017-02-01

    To improve system performance for moving-target detection and localization, this paper presents a new stochastic dynamical system driven by an aperiodic chirp signal and additive noise, in which the internal frequency varies linearly to match the driving frequency. By applying the fractional Fourier transform (FrFT) operator with the optimal order, the proposed time-domain dynamical system is transformed into an equivalent FrFT-domain system driven by a periodic signal and noise. System performance can therefore be conveniently analyzed in terms of the output signal-to-noise ratio (SNR) in the optimal FrFT domain. Simulation results demonstrate that the output SNR, as a function of the system parameter, exhibits different generalized SR behaviors for various internal parameters of the driving chirp signal and external parameters of the moving target.

  2. A one parameter class of fractional Maxwell-like models

    NASA Astrophysics Data System (ADS)

    Colombaro, Ivano; Giusti, Andrea; Mainardi, Francesco

    2017-06-01

    In this paper we discuss a one parameter modification of the well known fractional Maxwell model of viscoelasticity. Such models appear to be particularly interesting because they describe the short time asymptotic limit of a more general class of viscoelastic models known in the literature as Bessel models.

  3. Identification of parameters of discrete-continuous models

    SciTech Connect

    Cekus, Dawid; Warys, Pawel

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism, and the optimization is based on a genetic algorithm. The presented stages of the procedure make it possible to identify any parameters of discrete-continuous systems.

  4. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate the parameter values of the eutrophication model by assimilating concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to aid model calibration. The formulation is illustrated through a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. Numerical experiments with short-period model simulations using different hypothetical data sets, and long-period simulations using limited hypothetical data sets, demonstrated that the inverse model can satisfactorily estimate the parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing important questions such as the uniqueness of the parameter estimates and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  5. An appraisal-based coping model of attachment and adjustment to arthritis.

    PubMed

    Sirois, Fuschia M; Gick, Mary L

    2016-05-01

    Guided by pain-related attachment models and coping theory, we used structural equation modeling to test an appraisal-based coping model of how insecure attachment was linked to arthritis adjustment in a sample of 365 people with arthritis. The structural equation modeling analyses revealed indirect and direct associations of anxious and avoidant attachment with greater appraisals of disease-related threat, less perceived social support to deal with this threat, and less coping efficacy. There was evidence of reappraisal processes for avoidant but not anxious attachment. Findings highlight the importance of considering attachment style when assessing how people cope with the daily challenges of arthritis.

  6. Modelling the rate of change in a longitudinal study with missing data, adjusting for contact attempts.

    PubMed

    Akacha, Mouna; Hutton, Jane L

    2011-05-10

    The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders, and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, in which missingness is adjusted for by modelling the missingness process. The conclusions that recovery is slower and less satisfactory with age, and more rapid with a below-knee cast than with a tubular bandage, do not alter materially across the models investigated. The results also suggest that phone calls are the most effective way of retrieving questionnaires.

  7. Toward predictive food process models: A protocol for parameter estimation.

    PubMed

    Vilas, Carlos; Arias-Méndez, Ana; García, Míriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization, and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical, and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
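
    The multimodality pitfall mentioned above can be illustrated with a minimal sketch (an assumed toy example, not the paper's protocol): fitting the frequency of a damped oscillation has many local optima, so a multistart strategy is used to keep a local fit from stalling in the wrong basin.

    ```python
    import numpy as np

    def model(p, t):
        a, b = p
        return np.exp(-a * t) * np.cos(b * t)

    def gauss_newton(p, t, y, iters=50):
        # Plain Gauss-Newton least squares; diverging starts return None.
        p = np.array(p, float)
        for _ in range(iters):
            a, b = p
            e = np.exp(-a * t)
            r = model(p, t) - y
            J = np.column_stack([-t * e * np.cos(b * t), -t * e * np.sin(b * t)])
            try:
                step = np.linalg.solve(J.T @ J, J.T @ r)
            except np.linalg.LinAlgError:
                break
            p = p - step
            if not np.all(np.isfinite(p)):
                return None
        return p

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 200)
    y = model((0.5, 3.0), t) + rng.normal(0, 0.02, t.size)  # "data", true p = (0.5, 3.0)

    best, best_sse = None, np.inf
    for b0 in np.arange(0.5, 6.1, 0.5):       # multistart over frequency guesses
        p = gauss_newton((1.0, b0), t, y)
        if p is None:
            continue
        sse = np.sum((model(p, t) - y) ** 2)
        if sse < best_sse:
            best, best_sse = p, sse

    print(np.round(best, 3))
    ```

    Only the starts near the true frequency reach the global optimum; the multistart loop simply keeps the fit with the lowest residual sum of squares.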

  8. Variability in sensitivity of catchment model parameters in Austria

    NASA Astrophysics Data System (ADS)

    Sleziak, Patrik; Parajka, Juraj; Hlavčová, Kamila; Szolgay, Ján

    2017-04-01

    The main objective of this study is to assess the spatial and temporal changes in the sensitivity of model parameters in Austria. The sensitivity is assessed for a conceptual rainfall-runoff model (the TUW model) using the Latin hypercube sampling method. The spatial variability in parameter sensitivity is evaluated over 213 Austrian basins. The temporal variability is compared across three 10-year periods between 1981 and 2010. The results show two distinct regions with different parameter sensitivities. In the alpine and flatland regions, the most sensitive parameters relate to snow processes (degree-day melt factor) and soil processes (maximum soil field capacity), respectively. The evaluation of temporal variability indicates that, despite some changes in climate characteristics over the analyzed decades (i.e., a clear increase in air temperature and precipitation), the changes in sensitivity are not large. Our contribution will discuss the factors that control the temporal stability of parameter sensitivity in Austria.
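
    A minimal Latin hypercube sampler in numpy, for readers unfamiliar with the technique: each parameter's range is split into n equal strata, each stratum is sampled exactly once, and the strata are randomly paired across parameters. The parameter names and ranges below are illustrative assumptions, not the study's calibration ranges.

    ```python
    import numpy as np

    def latin_hypercube(n, bounds, rng):
        d = len(bounds)
        # Each row of the permuted tile is a shuffled 0..n-1 stratum index;
        # adding U(0,1) jitter and dividing by n gives stratified U(0,1) samples.
        u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
             + rng.random((n, d))) / n
        lo, hi = np.array(bounds).T
        return lo + u * (hi - lo)

    rng = np.random.default_rng(42)
    bounds = [(0.9, 4.5),    # e.g. degree-day melt factor [mm/degC/day] (assumed)
              (50.0, 600.0)] # e.g. maximum soil field capacity [mm] (assumed)
    samples = latin_hypercube(100, bounds, rng)
    print(samples.shape)
    ```

    Compared with plain random sampling, the stratification guarantees that every parameter's range is covered evenly even with modest sample counts.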

  9. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well across several modeling situations. The test situations differ in the employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly finer discretization capable of analyzing ASR recovery efficiency. Some situations employ parameters interpolated from the coarse valley model; others employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect the estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical, uniform specified boundary heads and gradients, and assume homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  10. A dimensionless parameter model for arc welding processes

    SciTech Connect

    Fuerschbach, P.W.

    1994-12-31

    A dimensionless parameter model previously developed for CO{sub 2} laser beam welding has been shown to be applicable to GTAW and PAW autogenous arc welding processes. The model facilitates estimates of weld size, power, and speed based on knowledge of the material's thermal properties. The dimensionless parameters can also be used to estimate the melting efficiency, which eases the development of weld schedules with lower heat input to the weldment. The mathematical relationship between the dimensionless parameters in the model has been shown to depend on the heat flow geometry in the weldment.

  11. Estimation of the input parameters in the Feller neuronal model

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived and used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the FPT moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and Ornstein-Uhlenbeck models are discussed.

  12. Computationally Inexpensive Identification of Non-Informative Model Parameters

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Kumar, R.; Zink, M.; Samaniego, L. E.; Schaefer, D.; Thober, S.; Rakovec, O.; Musuuza, J. L.; Craven, J. R.; Spieler, D.; Schrön, M.; Prykhodko, V.; Dalmasso, G.; Langenberg, B.; Attinger, S.

    2014-12-01

    Sensitivity analysis is used, for example, to identify parameters which induce the largest variability in model output and are thus informative during calibration. Variance-based techniques are employed for this purpose, but they unfortunately require a large number of model evaluations and are thus unsuitable for complex environmental models. We therefore developed a computationally inexpensive screening method, based on elementary effects, that automatically separates informative and non-informative model parameters. The method was tested using the mesoscale hydrologic model (mHM) with 52 parameters. The model was applied in three European catchments with different hydrological characteristics: the Neckar (Germany), Sava (Slovenia), and Guadalquivir (Spain). The method identified the same informative parameters as the standard Sobol' method but with less than 1% of the model runs. In Germany and Slovenia, 22 of the 52 parameters were informative, mostly in the formulations of evapotranspiration, interflow, and percolation. In Spain, 19 of the 52 parameters were informative, with an increased importance of soil parameters. We further showed that Sobol' indexes calculated for the subset of informative parameters are practically the same as the Sobol' indexes before the screening, while the number of model runs was reduced by more than 50%. The model mHM was then calibrated twice in the three test catchments: first with all 52 parameters taken into account, and then with only the informative parameters calibrated while all others were kept fixed. The Nash-Sutcliffe efficiencies were 0.87 and 0.83 in Germany, 0.89 and 0.88 in Slovenia, and 0.86 and 0.85 in Spain, respectively. This minor loss of at most 4% in model performance comes along with a substantial decrease of at least 65% in model evaluations. In summary, we propose an efficient screening method to identify non-informative model parameters that can be discarded during further applications. We have shown that sensitivity
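
    The elementary-effects screening can be sketched in a few lines (an assumed toy model on the unit hypercube, not mHM): each parameter is perturbed one at a time along random trajectories, and parameters with a small mean absolute effect (mu*) are flagged as non-informative.

    ```python
    import numpy as np

    def elementary_effects(f, d, r=50, delta=0.1, seed=0):
        # Morris-style screening: r random base points, one delta-perturbation
        # per parameter, returning mu* (mean absolute elementary effect).
        rng = np.random.default_rng(seed)
        ee = np.zeros((r, d))
        for i in range(r):
            x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
            fx = f(x)
            for j in range(d):
                xp = x.copy()
                xp[j] += delta
                ee[i, j] = (f(xp) - fx) / delta
        return np.abs(ee).mean(axis=0)

    # Toy model: only the first two of five parameters matter.
    f = lambda x: 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.001 * x[4]
    mu_star = elementary_effects(f, d=5)
    print(np.round(mu_star, 3))
    ```

    The cost is r*(d+1) model runs, which is why this scales to models where variance-based Sobol' estimation is infeasible.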

  13. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    DOE PAGES

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; ...

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
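
    The per-class vertical adjustment can be sketched as follows (a synthetic two-class example with assumed bias magnitudes, using a median offset per biomass class rather than the paper's exact quartile scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    density = rng.choice(["low", "high"], size=n)    # biomass class per point
    true_z = rng.normal(0.8, 0.2, n)                 # surveyed bare earth [m]
    bias = np.where(density == "high", 0.8, 0.4)     # lidar overshoot by class (assumed)
    lidar_z = true_z + bias + rng.normal(0, 0.05, n)

    # Per-class offset: median lidar-minus-survey error at calibration points,
    # subtracted from every lidar elevation in that class.
    adjusted = lidar_z.copy()
    for cls in ("low", "high"):
        m = density == cls
        adjusted[m] -= np.median(lidar_z[m] - true_z[m])

    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print(f"RMSE before {rmse(lidar_z, true_z):.2f} m, after {rmse(adjusted, true_z):.2f} m")
    ```

    Because the lidar bias in marsh grass is roughly constant within a biomass class, a single per-class offset removes most of the systematic error while leaving the random component untouched.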

  14. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.

  15. Field measurements and neural network modeling of water quality parameters

    NASA Astrophysics Data System (ADS)

    Qishlaqi, Afishin; Kordian, Sediqeh; Parsaie, Abbas

    2017-01-01

    Rivers are among the main sources of water for agricultural, industrial, and urban use; continuous monitoring of their quality is therefore necessary. Recently, artificial neural networks have been proposed as a powerful tool for modeling and predicting water quality parameters in natural streams. In this paper, a multilayer perceptron (MLP) neural network model was developed to predict the water quality parameters of the Tireh River, located in southwest Iran. TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, and Ca, the main water quality parameters, were measured and predicted using the MLP model. The architecture of the proposed MLP model included two hidden layers with eight and six neurons, respectively. The tangent sigmoid and pure-line functions were selected as the transfer functions for the neurons in the hidden and output layers, respectively. The results showed that the MLP model performs well in predicting the water quality parameters of the Tireh River. To assess the performance of the MLP model along the studied reach, the authors considered another 14 stations in addition to the existing sampling stations. Evaluating the developed MLP model's ability to map the relations between the water quality parameters along the studied reach showed that it has suitable accuracy; the minimum correlation between the MLP model results and the measured data was 0.85.
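
    The described topology (two hidden layers of eight and six tansig neurons with a linear output layer) can be sketched as a forward pass in numpy. The input/output dimensions and the random weights below are placeholders; in the study the weights come from training on the river measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, h1, h2, n_out = 4, 8, 6, 9   # 4 inputs -> 9 quality parameters (assumed split)

    W1, b1 = rng.normal(0, 0.5, (h1, n_in)), np.zeros(h1)
    W2, b2 = rng.normal(0, 0.5, (h2, h1)), np.zeros(h2)
    W3, b3 = rng.normal(0, 0.5, (n_out, h2)), np.zeros(n_out)

    def mlp(x):
        a1 = np.tanh(W1 @ x + b1)      # first hidden layer (tansig)
        a2 = np.tanh(W2 @ a1 + b2)     # second hidden layer (tansig)
        return W3 @ a2 + b3            # pure-line (linear) output layer

    y = mlp(np.array([0.2, -0.1, 0.5, 0.3]))
    print(y.shape)
    ```

    The linear output layer is the standard choice for regression targets such as concentrations, since it leaves the outputs unbounded.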

  16. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS?s Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  17. Crop parameters for modeling sugarcane under rainfed conditions in Mexico

    USDA-ARS?s Scientific Manuscript database

    Crop models with well-tested parameters can improve sugarcane productivity for food and biofuel generation. This study aimed to (i) calibrate the light extinction coefficient (k) and other crop parameters for the sugarcane cultivar CP 72-2086, an early-maturing cultivar grown in Mexico and many oth...

  18. Analysis of the Second Model Parameter Estimation Experiment Workshop Results

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.

    2002-05-01

    The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for a priori parameter estimation for the land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goal/objectives and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for 12 basins selected for the Tucson MOPEX Workshop.

  19. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning, and test phases of a seismic sequence. The idea arose after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one successfully predicted the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was underestimation of the forecasted events, due to model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learned parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters kept fixed during the test period.

  20. Effect of Noise in the Three-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In a preceding research report, ONR/RR-82-1 (Information Loss Caused by Noise in Models for Dichotomous Items), observations were made on the effect of noise accommodated in different types of models on the dichotomous response level. In the present paper, focus is put upon the three-parameter logistic model, which is widely used among…

  2. REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION

    EPA Science Inventory

    This review consists of two sections. Part I provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...

  3. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

  5. Lumped Parameter Model (LPM) for Light-Duty Vehicles

    EPA Pesticide Factsheets

    EPA’s Lumped Parameter Model (LPM) is a free, desktop computer application that estimates the effectiveness (CO2 Reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.

  6. Dynamic Load Model using PSO-Based Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu

    This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of actual loads given appropriate parameters. However, the model requires many parameters, which are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using voltage, active power, and reactive power data measured during voltage sags.
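
    A minimal PSO parameter-fitting sketch (numpy only; the "model" is a stand-in quadratic rather than the load model, and all constants are assumptions): the swarm minimizes the squared error between measured and simulated response.

    ```python
    import numpy as np

    def pso(loss, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        # Standard global-best PSO with inertia w and cognitive/social pulls c1, c2.
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        x = lo + rng.random((n, lo.size)) * (hi - lo)
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([loss(p) for p in x])
        gbest = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([loss(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()
        return gbest

    t = np.linspace(0, 1, 50)
    measured = 2.0 * t - 1.0 * t**2          # "measurements" from true params (2, -1)
    loss = lambda p: np.sum((p[0] * t + p[1] * t**2 - measured) ** 2)
    p_hat = pso(loss, [(-5, 5), (-5, 5)])
    print(np.round(p_hat, 3))
    ```

    In the paper's setting, `loss` would instead run the composite load model and compare its simulated voltage-sag response against the measurements.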

  7. Parameter Estimation for the Thurstone Case III Model.

    ERIC Educational Resources Information Center

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  8. Online parameter estimation for surgical needle steering model.

    PubMed

    Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing

    2006-01-01

    Estimation of system parameters from noisy input/output data is a major field in control and signal processing, and many estimation methods have been proposed in recent years. Among these, Extended Kalman Filtering (EKF) is very useful for estimating the parameters of a nonlinear and time-varying system; moreover, it can remove the effects of noise to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying tissue deformation. One difficulty in using such a model is estimating the spring and damper coefficients. Here, we propose an online parameter estimator using the EKF to solve this problem. The detailed design is presented in this paper. Computer simulations and physical experiments have shown that the estimator can estimate the parameters accurately, with fast convergence, and improve the model's efficacy.
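
    The standard trick behind such estimators, sketched here on a plain mass-spring-damper rather than the paper's beam model (all values are illustrative assumptions): append the unknown stiffness k and damping c to the state vector and run an EKF on noisy position measurements.

    ```python
    import numpy as np

    m, dt, steps = 1.0, 0.01, 4000
    k_true, c_true = 4.0, 0.8
    u = lambda t: np.sin(t)                       # known excitation

    def step_truth(x, v, t):
        a = (u(t) - c_true * v - k_true * x) / m
        return x + dt * v, v + dt * a

    # Augmented state z = [x, v, k, c]; parameters follow (near-)constant dynamics.
    z = np.array([0.0, 0.0, 1.0, 0.1])            # deliberately wrong k, c guesses
    P = np.diag([0.1, 0.1, 4.0, 1.0])
    Q = np.diag([1e-8, 1e-8, 1e-6, 1e-6])
    R = 1e-4
    H = np.array([[1.0, 0.0, 0.0, 0.0]])

    rng = np.random.default_rng(0)
    x, v = 0.3, 0.0
    for i in range(steps):
        t = i * dt
        x, v = step_truth(x, v, t)
        y = x + rng.normal(0, np.sqrt(R))         # noisy position measurement
        # Predict: Euler step of the augmented model and its Jacobian F.
        xe, ve, ke, ce = z
        z = z + dt * np.array([ve, (u(t) - ce * ve - ke * xe) / m, 0.0, 0.0])
        F = np.eye(4) + dt * np.array([[0, 1, 0, 0],
                                       [-ke / m, -ce / m, -xe / m, -ve / m],
                                       [0, 0, 0, 0],
                                       [0, 0, 0, 0]])
        P = F @ P @ F.T + Q
        # Update with the scalar measurement.
        S = (H @ P @ H.T + R).item()
        K = (P @ H.T / S).ravel()
        z = z + K * (y - z[0])
        P = (np.eye(4) - np.outer(K, H)) @ P

    print(f"k estimate {z[2]:.2f} (true {k_true}), c estimate {z[3]:.2f} (true {c_true})")
    ```

    With persistent excitation, the cross-covariances between the states and the appended parameters let the filter pull k and c toward their true values online.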

  9. Uncertainty in dual permeability model parameters for structured soils

    NASA Astrophysics Data System (ADS)

    Arora, B.; Mohanty, B. P.; McGuire, J. T.

    2012-01-01

    Successful application of dual permeability models (DPMs) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty of uniquely identifying parameters for the additional macropore and matrix-macropore interface regions, and of establishing the requisite experimental data for a DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in the dual permeability model parameters of experimental soil columns with different macropore distributions (a single macropore, and low- and high-density multiple macropores). The uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 of the 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on the covariance structure is lacking; otherwise, additional information on parameter correlations must be supplied to resolve the equifinality of DPM parameters. The study also highlights the placement and significance of the matrix-macropore interface in flow experiments on soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that in drainage experiments the macropores drain first, followed by the interface region, and then by the pores of the matrix domain. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are a function not only of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
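
    For reference, the conventional MH sampler used as the baseline above reduces to a few lines. This sketch samples a Gaussian toy posterior for a single hypothetical parameter; in the study, evaluating the log-posterior would require running the dual permeability flow model at each proposal.

    ```python
    import numpy as np

    def metropolis(logpost, x0, n=20000, step=0.5, seed=0):
        # Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
        rng = np.random.default_rng(seed)
        chain = np.empty(n)
        x, lp = x0, logpost(x0)
        accepted = 0
        for i in range(n):
            xp = x + rng.normal(0, step)
            lpp = logpost(xp)
            if np.log(rng.random()) < lpp - lp:   # accept with prob min(1, ratio)
                x, lp = xp, lpp
                accepted += 1
            chain[i] = x
        return chain, accepted / n

    # Toy posterior: log-parameter ~ N(-2.0, 0.3^2) (values are illustrative).
    logpost = lambda x: -0.5 * ((x + 2.0) / 0.3) ** 2
    chain, acc = metropolis(logpost, x0=0.0)
    burn = chain[5000:]
    print(f"mean {burn.mean():.2f}, sd {burn.std():.2f}, acceptance {acc:.2f}")
    ```

    The adaptive variant (AMCMC) differs mainly in tuning the proposal covariance from the chain history, which is what resolves the strong parameter correlations the abstract describes.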

  10. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters when searching for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted, and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes at most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of the simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distributions of parameters, the marginal posterior probability density functions, and the estimates of uncertainty of the simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has the more significant impact on parameter and uncertainty estimation.

  11. Oscillating Quintom Model with Time Periodic Varying Deceleration Parameter

    NASA Astrophysics Data System (ADS)

    Shen, Ming; Zhao, Liang

    2014-01-01

We propose a new law for the deceleration parameter that varies periodically with time. According to the law, we give a model of the oscillating universe with quintom matter in the framework of a 4-dimensional Friedmann-Robertson-Walker background. We find that, in the model, the Hubble parameter oscillates and remains positive. The universe undergoes decelerating expansion and accelerating expansion alternately, without singularity.

  12. Using risk-adjustment models to identify high-cost risks.

    PubMed

    Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J

    2003-11-01

    We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
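The AUC used above to compare the risk models can be computed directly from the rank-sum (Mann-Whitney) statistic: it is the probability that a randomly chosen high-cost enrollee receives a higher expense forecast than a randomly chosen low-cost one. A minimal sketch with hypothetical forecasts and labels:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # count pairwise "wins" of positives over negatives; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical annual-expense forecasts and "high-cost" labels (1 = high cost).
forecasts = [0.9, 0.35, 0.6, 0.4, 0.3]
high_cost = [1, 1, 1, 0, 0]
score = auc(forecasts, high_cost)
```

An AUC of 1.0 means perfect separation of high-cost from low-cost enrollees; 0.5 is chance level, and the 0.83-0.86 range reported above sits well above it.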

  13. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  14. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.

  15. Modelling goal adjustment in social relationships: Two experimental studies with children and adults.

    PubMed

    Thomsen, Tamara; Kappes, Cathleen; Schwerdt, Laura; Sander, Johanna; Poller, Charlotte

    2016-10-23

In two experiments, we investigated observational learning in social relationships as one possible pathway to the development of goal adjustment processes. In the first experiment, 56 children (M = 9.29 years) observed their parent as a model; in the second, 50 adults (M = 32.27 years) observed their romantic partner. Subjects were randomly assigned to three groups: goal engagement (GE), goal disengagement (GD), or control group (CO) and were asked to solve (unsolvable) puzzles. Before trying to solve the puzzles by themselves, subjects observed the instructed model, who was told to continue with the same puzzle (GE) or to switch to the next puzzle (GD). Results show that children in the GE group switched significantly less than in the GD or CO group. There was no difference between the GD group and CO group. Adults in the GE group switched significantly less than in the GD or CO group, whereas subjects in the GD group switched significantly more often than the CO group. Statement of contribution What is already known on this subject? Previous research focused mainly on the functions of goal adjustment processes. It rarely considered processes and conditions that contribute to the development of goal engagement and goal disengagement. There are only two cross-sectional studies that directly investigate this topic. Previous research that claims observational learning as a pathway of learning emotion regulation or adjustment processes has (only) relied on correlational methods and, thus, does not allow any causal interpretations. Previous research, albeit claiming a life span focus, mostly investigated goal adjustment processes in one specific age group (mainly adults). There is no study that investigates the same processes in different age groups. What does this study add? In our two studies, we focused on the conditions of goal adjustment processes and sought to demonstrate one potential pathway of learning or changing the application of goal adjustment

  16. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
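The step of eliminating models with negligibly small updated probabilities can be sketched with the standard information-criterion weighting that underlies model averaging. The seven IC values below are invented for illustration; in the report the probabilities come from the calibration itself:

```python
import math

def ic_weights(ic_values):
    """Posterior-style model weights from information-criterion values
    (AIC/BIC/KIC): w_i proportional to exp(-delta_IC_i / 2)."""
    deltas = [ic - min(ic_values) for ic in ic_values]
    raw = [math.exp(-d / 2) for d in deltas]
    total = sum(raw)
    return [w / total for w in raw]

# Seven hypothetical variogram models; the last two fit far worse.
ic = [100.2, 101.0, 100.5, 100.9, 101.3, 152.7, 149.8]
weights = ic_weights(ic)

# Models with negligible probability are dropped before model averaging;
# the rest contribute to the averaged (e.g. kriged) prediction.
retained = [i for i, w in enumerate(weights) if w > 0.01]
```

As in the abstract, the ranking among the retained models stays ambiguous (their weights are comparable), which is exactly why averaging over them is preferable to picking a single winner.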

  17. Critical parameters of consistent relativistic mean-field models

    NASA Astrophysics Data System (ADS)

    Lourenço, O.; Dutra, M.; Menezes, D. P.

    2017-06-01

In the present work, the critical temperature, critical pressure, and critical density, known as the critical parameters related to the liquid-gas phase transition, are calculated for 34 relativistic mean-field models, which were shown to satisfy nuclear matter constraints in a comprehensive study involving 263 models. The compressibility factor was calculated and all 34 models present values lower than the one obtained with the van der Waals equation of state. The critical temperatures were compared with experimental data and just two classes of models can reach values close to them. A correlation between the critical parameters and the incompressibility was obtained.
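The van der Waals benchmark used for the comparison follows from its critical point, where the compressibility factor is exactly 3/8 regardless of the a and b coefficients. A quick check (the coefficients are roughly those of nitrogen, chosen only for illustration):

```python
# Van der Waals critical point: v_c = 3b, T_c = 8a/(27*R*b), P_c = a/(27*b**2).
R = 8.314               # gas constant, J/(mol*K)
a, b = 0.137, 3.87e-5   # roughly nitrogen's vdW coefficients, SI units

v_c = 3 * b
T_c = 8 * a / (27 * R * b)
P_c = a / (27 * b ** 2)
Z_c = P_c * v_c / (R * T_c)   # = 3/8 for every van der Waals fluid
```

Since Z_c = 3/8 = 0.375 for any choice of a and b, the statement that all 34 models fall below the van der Waals value is a single universal bound, not a model-by-model comparison.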

  18. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  19. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
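The Rosenbrock case study can be mimicked with a hand-rolled uniform Monte Carlo sampler in a few lines of plain Python. This is a sketch of the sampling idea only, not SPOTPY's actual API:

```python
import random

def rosenbrock(x, y):
    """The classic banana-shaped test function; global minimum 0 at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def monte_carlo_sample(objective, bounds, repetitions=5000, seed=1):
    """Uniform random sampling that stores every parameter set with its
    objective value, mimicking a calibration run's result database."""
    rng = random.Random(seed)
    results = []
    for _ in range(repetitions):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        results.append((objective(*params), params))
    results.sort(key=lambda t: t[0])     # best (lowest) objective first
    return results

runs = monte_carlo_sample(rosenbrock, bounds=[(-2.0, 2.0), (-1.0, 3.0)])
best_obj, best_params = runs[0]
```

The point of a toolbox like SPOTPY is that the same `objective`/`bounds` interface can be handed to much smarter samplers (SCE-UA, DREAM, etc.) without touching the model code.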

  20. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262

  1. Parameter estimation and model selection in computational biology.

    PubMed

    Lillacci, Gabriele; Khammash, Mustafa

    2010-03-05

A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
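The augmented-state idea behind the Kalman-filter first guess can be sketched for a scalar system x_{k+1} = a*x_k + u with unknown parameter a. This is a minimal hand-rolled EKF on an invented toy system, not the authors' implementation:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Simulate noisy measurements of x_{k+1} = a*x_k + u.
a_true, u, r = 0.95, 1.0, 1e-2       # r: measurement noise variance
rng = random.Random(0)
x, ys = 0.0, []
for _ in range(500):
    x = a_true * x + u
    ys.append(x + rng.gauss(0, r ** 0.5))

# EKF on the augmented state z = [x, a]; dynamics f(z) = [a*x + u, a].
z = [0.0, 0.5]                       # a is deliberately mis-guessed at 0.5
P = [[1.0, 0.0], [0.0, 1.0]]
q = 1e-8                             # tiny process noise keeps P well behaved
for y in ys:
    F = [[z[1], z[0]], [0.0, 1.0]]   # Jacobian of f at the current estimate
    z = [z[1] * z[0] + u, z[1]]      # predict
    FT = [[F[0][0], F[1][0]], [F[0][1], F[1][1]]]
    P = matmul(matmul(F, P), FT)
    P = [[P[i][j] + (q if i == j else 0.0) for j in range(2)] for i in range(2)]
    s = P[0][0] + r                  # innovation variance (H = [1, 0])
    K = [P[0][0] / s, P[1][0] / s]   # Kalman gain
    innov = y - z[0]
    z = [z[0] + K[0] * innov, z[1] + K[1] * innov]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

a_est = z[1]
```

The cross-covariance P[1][0] built up by the Jacobian is what lets innovations in the observed state correct the unobserved parameter; the paper's identifiability test then checks whether such corrections are actually informative.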

  2. The Pennsylvania Trauma Outcomes Study Risk-Adjusted Mortality Model: Results of a Statewide Benchmarking Program.

    PubMed

    Wiebe, Douglas J; Holena, Daniel N; Delgado, M Kit; McWilliams, Nathan; Altenburg, Juliet; Carr, Brendan G

    2017-05-01

Trauma centers need objective feedback on performance to inform quality improvement efforts. The Trauma Quality Improvement Program recently published recommended methodology for case mix adjustment and benchmarking performance. We tested the feasibility of applying this methodology to develop risk-adjusted mortality models for a statewide trauma system. We performed a retrospective cohort study of patients ≥16 years old at Pennsylvania trauma centers from 2011 to 2013 (n = 100,278). Our main outcome measure was observed-to-expected mortality ratios (overall and within blunt, penetrating, multisystem, isolated head, and geriatric subgroups). Patient demographic variables, physiology, mechanism of injury, transfer status, injury severity, and pre-existing conditions were included as predictor variables. The statistical model had excellent discrimination (area under the curve = 0.94). Funnel plots of observed-to-expected ratios identified five centers with lower than expected mortality and two centers with higher than expected mortality. No centers were outliers for management of penetrating trauma, but five centers had lower and three had higher than expected mortality for blunt trauma. It is feasible to use Trauma Quality Improvement Program methodology to develop risk-adjusted models for statewide trauma systems. Even with smaller numbers of trauma centers than are available in national datasets, it is possible to identify high and low outliers in performance.

  3. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    NASA Astrophysics Data System (ADS)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
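The elementary-effects computation itself is compact. The sketch below uses independent random base points rather than the OT/MOT/SU trajectory designs compared above (trajectories reuse model runs; this simplified one-at-a-time version does not), and the three-parameter toy model is invented for illustration:

```python
import random

def elementary_effects(f, n_params, n_traj=20, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) per parameter on the unit cube,
    using independent random base points (simplified Morris screening)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        base = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        f0 = f(base)
        for i in range(n_params):
            pert = base[:]
            pert[i] += delta          # one-at-a-time perturbation
            effects[i].append((f(pert) - f0) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Toy watershed response: parameter 0 dominates, parameter 2 is nearly inert.
f = lambda p: 5 * p[0] + p[1] ** 2 + 0.01 * p[2]
mu = elementary_effects(f, 3)
ranking = sorted(range(3), key=lambda i: -mu[i])   # most important first
```

Screening then keeps only the top-ranked parameters for the expensive FAST/Sobol' analysis, which is exactly the qualitative role the abstract assigns to EE.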

  4. Improvements in simulation of atmospheric boundary layer parameters through data assimilation in ARPS mesoscale atmospheric model

    NASA Astrophysics Data System (ADS)

    Subrahamanyam, D. Bala; Ramachandran, Radhika; Kunhikrishnan, P. K.

    2006-12-01

In a broad sense, 'data assimilation' refers to techniques whereby realistic observational datasets are injected into a model simulation to produce more accurate forecasts. Several schemes are available for inserting observational datasets into a model. In this piece of research, we present one of the simplest yet most powerful data assimilation techniques - nudging through optimal interpolation - in the ARPS (Advanced Regional Prediction System) model. Through this technique, we first identify the assimilation window in space and time over which the observational datasets need to be inserted and the model products need to be adjusted. Appropriate model variables are then adjusted toward the realistic observational datasets, with proper weight given to the observations. Incorporating such a subroutine in the model to handle the assimilation provides a powerful tool for improving the forecast parameters. Such a technique can be very useful in cases where observational datasets are available at regular intervals. In this article, we demonstrate the effectiveness of this technique for simulating profiles of atmospheric boundary layer parameters for the tiny island of Kaashidhoo in the Republic of Maldives, where regular GPS Loran Atmospheric Soundings were carried out during the Intensive Field Phase of the Indian Ocean Experiment (INDOEX, IFP-99).
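A toy scalar analogue shows the nudging mechanism: inside the assimilation window, the model tendency gets an extra relaxation term G*(obs - model). The biased forcing and the weight G below are invented for illustration and have nothing to do with the ARPS configuration itself:

```python
dt, G, obs = 0.01, 2.0, 1.0          # time step, nudging weight, observed value
x_free = x_nudged = 0.0
for _ in range(2000):                # integrate to t = 20, well past spin-up
    # biased model tendency: settles at 0.5 instead of the observed 1.0
    x_free += dt * (-x_free + 0.5)
    # nudged run: same tendency plus relaxation toward the observation
    x_nudged += dt * ((-x_nudged + 0.5) + G * (obs - x_nudged))
```

The free run converges to its biased equilibrium (0.5), while the nudged run is pulled to a weighted compromise between model and observation; increasing G trusts the observations more, which is the "proper weightage" the abstract refers to.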

  5. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    PubMed

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of having high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have been increasingly carried out using this model. In China, research applying LUR models was done mostly at the model construction stage, and findings from related epidemiological studies were rarely reported. In this paper, the sources of heterogeneity and research progress of meta-analysis research on the associations between air pollution and adverse pregnancy outcomes were analyzed. The methods of the characteristics of temporally-adjusted LUR models were introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model were systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  6. Parameter Transferability Across Spatial and Temporal Resolutions in Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, R.; Torfs, P. J.; Zappa, M.; Mizukami, N.; Clark, M. P.; Uijlenhoet, R.

    2015-12-01

Improvements in computational power and data availability provided new opportunities for hydrological modeling. The increased complexity of hydrological models, however, also leads to time-consuming optimization procedures. Moreover, observations are still required to calibrate the model. Both to decrease calculation time of the optimization and to be able to apply the model in poorly gauged basins, many studies have focused on transferability of parameters. We adopted a probabilistic approach to systematically investigate parameter transferability across both temporal and spatial resolution. A Variable Infiltration Capacity model for the Thur basin (1703 km2, Switzerland) was set up and run at four different spatial resolutions (1x1 km, 5x5 km, 10x10 km, and lumped) and three different temporal resolutions (hourly, daily, monthly). Three objective functions were used to evaluate the model: Kling-Gupta Efficiency (KGE(Q)), Nash-Sutcliffe Efficiency (NSE(Q)) and NSE(logQ). We used a Hierarchical Latin Hypercube Sample (Vorechovsky, 2014) to efficiently sample the most sensitive parameters. The model was run 3150 times and the best 1% of the runs was selected as behavioral. The overlap in selected behavioral sets for different spatial and temporal resolutions was used as an indicator of parameter transferability. There was a large overlap in selected sets for the different spatial resolutions, implying that parameters were to a large extent transferable across spatial resolutions. The temporal resolution, however, had a larger impact on the parameters; it significantly affected the parameter distributions for at least four out of seven parameters. The parameter values for the monthly time step were found to be substantially different from those for daily and hourly time steps. This suggests that the output from models which are calibrated on a monthly time step, cannot be interpreted or analysed on an hourly or daily time step. It was also shown that the selected objective
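The objective functions used above can be written down in a few lines; these are the standard NSE and KGE formulations, not tied to the VIC set-up itself:

```python
def mean(v):
    return sum(v) / len(v)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit, 0 matches the mean."""
    mo = mean(obs)
    return 1 - (sum((s - o) ** 2 for s, o in zip(sim, obs))
                / sum((o - mo) ** 2 for o in obs))

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 minus the Euclidean distance of
    (r, alpha, beta) from the ideal point (1, 1, 1)."""
    ms, mo = mean(sim), mean(obs)
    ss = sum((s - ms) ** 2 for s in sim) ** 0.5
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (ss * so)
    alpha, beta = ss / so, ms / mo       # variability and bias ratios
    return 1 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

obs = [1.0, 2.0, 3.0, 4.0, 5.0]          # toy discharge series
sim = [o + 0.1 for o in obs]             # small constant bias
```

NSE(logQ) is simply `nse` applied to log-transformed flows, which shifts the emphasis from peaks to low-flow periods; this is why the choice among the three functions changes which parameter sets are selected as behavioral.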

  7. The HHS-HCC Risk Adjustment Model for Individual and Small Group Markets under the Affordable Care Act

    PubMed Central

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387

  8. A Low-cost, Off-the-Shelf Ready Field Programmable Gate Array diode Laser Controller With adjustable parameters

    NASA Astrophysics Data System (ADS)

    Yang, Ge; Barry, John. F.; Shuman, Edward; Demille, David

    2010-03-01

We have constructed a field-programmable gate array (FPGA) based lock-in amplifier/PID servo controller for use in laser frequency locking and other applications. Our system is constructed from a commercial FPGA evaluation board with a total cost of less than 400, and no additional electronic components are required. FPGA technology allows us to implement parallel real-time signal processing with great flexibility. Internal parameters such as the modulation frequency, phase delay, gains, and filter time constants can be changed on the fly within a very wide dynamic range through an iPod-like interface. This system was used to lock a tunable diode laser to an external Fabry-Perot cavity with piezo and current feedback. A loop bandwidth of 200 kHz was achieved, limited only by the slow ADCs available on the FPGA board. Further improvements in both hardware and software seem possible and will be discussed.
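At its core, the servo logic implemented on the FPGA is a discrete PID loop. A software sketch against a hypothetical first-order plant (all gains, the time step, and the plant time constant are invented for illustration, not the instrument's values):

```python
kp, ki, kd = 0.5, 2.0, 0.0           # proportional, integral, derivative gains
dt, tau = 1e-3, 0.05                 # loop period and plant time constant
setpoint, y = 1.0, 0.0               # lock point and plant output
integ, prev_err = 0.0, 0.0
for _ in range(20000):               # 20 s of servo time
    err = setpoint - y               # e.g. frequency error from the lock-in
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv   # actuator drive (piezo/current)
    prev_err = err
    y += dt * (u - y) / tau          # first-order plant response
```

The integral term is what removes the steady-state offset; on the FPGA the same three terms run in parallel fixed-point arithmetic at the ADC sample rate, which is why the ADC speed sets the loop bandwidth.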

  9. Behaviour of the cosmological model with variable deceleration parameter

    NASA Astrophysics Data System (ADS)

    Tiwari, R. K.; Beesham, A.; Shukla, B. K.

    2016-12-01

We consider the Bianchi type-VI0 massive string universe with decaying cosmological constant Λ. To solve Einstein's field equations, we assume that the shear scalar is proportional to the expansion scalar and that the deceleration parameter q is a linear function of the Hubble parameter H, i.e., q = α + βH, which yields the scale factor a = e^{(1/β)√(2βt + k1)}. The model expands exponentially with cosmic time t. The value of the cosmological constant Λ is small and positive. Also, we discuss physical parameters as well as the jerk parameter j, which predict that the universe in this model originates as in the ΛCDM model.

  10. Improved input parameters for diffusion models of skin absorption.

    PubMed

    Hansen, Steffi; Lehr, Claus-Michael; Schaefer, Ulrich F

    2013-02-01

Using a diffusion model to predict skin absorption requires accurate estimates of input parameters for model geometry, affinity, and transport characteristics. This review summarizes methods to obtain input parameters for diffusion models of skin absorption focusing on partition and diffusion coefficients. These include experimental methods, extrapolation approaches, and correlations that relate partition and diffusion coefficients to tabulated physico-chemical solute properties. Exhaustive databases on lipid-water and corneocyte protein-water partition coefficients are presented and analyzed to provide improved approximations to estimate lipid-water and corneocyte protein-water partition coefficients. The most commonly used estimates of lipid and corneocyte diffusion coefficients are also reviewed. To improve modeling of skin absorption in the future, diffusion models should include the vertical stratum corneum heterogeneity, slow equilibration processes, the absorption from complex non-aqueous formulations, and an improved representation of dermal absorption processes. This will require input parameters for which no suitable estimates are yet available.
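The role of the two key input parameters can be made concrete with the standard homogeneous-membrane relations for steady-state permeation; the numerical values below are purely illustrative and not taken from the databases discussed:

```python
# Steady-state permeation through a homogeneous membrane of thickness h,
# given the partition coefficient K and diffusion coefficient D.
K = 0.5        # lipid-water partition coefficient (dimensionless)
D = 1e-9       # diffusion coefficient, cm^2/s
h = 1.5e-3     # stratum corneum diffusion path length, cm

kp = K * D / h             # permeability coefficient, cm/s
t_lag = h ** 2 / (6 * D)   # diffusion lag time, s
Cv = 2.0                   # vehicle concentration, mg/cm^3
J = kp * Cv                # steady-state flux, mg/(cm^2 s)
```

Because kp is the product K*D/h, an error in either the partition or the diffusion coefficient propagates directly into the predicted flux, which is why the review concentrates on improving exactly these two inputs.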

  11. Application of physical parameter identification to finite element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1986-01-01

    A time-domain technique for matching response predictions of a structural dynamic model to test measurements is developed. Significance is attached to prior estimates of physical model parameters and to experimental data. The Bayesian estimation procedure allows confidence levels in predicted physical and modal parameters to be obtained. Structural optimization procedures are employed to minimize an error functional, with the physical parameters describing the finite element model as design variables. The number of complete FEM analyses is reduced using approximation concepts, including the recently developed convoluted Taylor series approach. The error functional is represented in closed form by converting free-decay test data to a time-series model using Prony's method. The technique is demonstrated on the simulated response of a simple truss structure.
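Prony's method, mentioned above, can be sketched in a few lines (a generic implementation with our own variable names, not the paper's code): fit a free-decay signal as a sum of damped exponentials by solving a linear-prediction problem and rooting the characteristic polynomial.

```python
import numpy as np

def prony(y, p, dt):
    """Return complex poles s_k of y(t) ~ sum_k A_k exp(s_k t), model order p."""
    n = len(y)
    # linear prediction: y[i] = -sum_{j=1..p} a_j y[i-j]
    A = np.column_stack([y[p - j:n - j] for j in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -y[p:n], rcond=None)
    roots = np.roots(np.concatenate(([1.0], a)))  # z-plane roots
    return np.log(roots.astype(complex)) / dt     # continuous poles s = ln(z)/dt

# synthetic free decay: one mode at 2 Hz with 5% of critical damping
dt = 0.01
time = np.arange(0, 2, dt)
wn, zeta = 2 * np.pi * 2.0, 0.05
y = np.exp(-zeta * wn * time) * np.cos(wn * np.sqrt(1 - zeta**2) * time)

s = prony(y, 2, dt)
freq = np.abs(s.imag).max() / (2 * np.pi)  # recovered damped frequency, ~2 Hz
damp = -s.real.max() / np.abs(s).max()     # recovered damping ratio, ~0.05
print(freq, damp)
```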

  12. A six-parameter Iwan model and its application

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming

    2016-02-01

    The Iwan model is a practical tool for describing the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves, and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out. Simulation effects of different numbers of Jenkins elements are discussed. The results indicate that the six-parameter Iwan model can accurately reproduce the experimental phenomena of joints.
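A generic parallel-Jenkins construction (our simplified sketch; the paper's six-parameter distribution is not reproduced here) illustrates how an Iwan model produces hysteretic joint forces: N elastic-perfectly-plastic Jenkins elements in parallel, each with stiffness k_i and slip force f_i.

```python
import numpy as np

class IwanJenkins:
    def __init__(self, stiffness, slip_force):
        self.k = np.asarray(stiffness, float)
        self.f = np.asarray(slip_force, float)
        self.slider = np.zeros_like(self.k)  # plastic displacement per element

    def force(self, u):
        """Joint force at imposed displacement u (quasi-static update)."""
        trial = self.k * (u - self.slider)
        stuck = np.abs(trial) <= self.f
        # sliding elements: slider catches up so |force| stays at the slip limit
        self.slider = np.where(stuck, self.slider,
                               u - np.sign(trial) * self.f / self.k)
        return np.sum(np.where(stuck, trial, np.sign(trial) * self.f))

joint = IwanJenkins(stiffness=[1.0] * 4, slip_force=[0.5, 1.0, 1.5, 2.0])
path = [0.0, 2.5, -2.5, 2.5]   # load, reverse, reload (all elements fully slip)
forces = [joint.force(u) for u in path]
# force at each reversal point: 0, +5, -5, +5 (sum of the slip forces)
print(forces)
```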

  13. Piecewise parameter estimation for stochastic models in COPASI.

    PubMed

    Bergmann, Frank T; Sahle, Sven; Zimmer, Christoph

    2016-05-15

    Computational modeling is widely used for deepening the understanding of biological processes. Parameterizing models to experimental data requires computationally efficient techniques for parameter estimation. Challenges for parameter estimation include, in general, the high dimensionality of the parameter space with local minima and, specifically for stochastic modeling, the intrinsic stochasticity. We implemented the recently suggested multiple shooting for stochastic systems (MSS) objective function for parameter estimation in stochastic models into COPASI. This MSS objective function can be used for parameter estimation in stochastic models but also shows beneficial properties when used for ordinary differential equation models. The method can be applied with all of COPASI's optimization algorithms, and can be used for SBML models as well. The methodology is available in COPASI as of version 4.15.95 and can be downloaded from http://www.copasi.org. Supplementary data are available at Bioinformatics online.
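The idea behind a multiple-shooting objective can be illustrated on a toy decay model (our own sketch, not COPASI's MSS implementation): the simulation restarts from each observed point, so only short segments are compared to the data and stochastic trajectories cannot drift far before being re-anchored.

```python
import numpy as np

def simulate_segment(x0, k, dt):
    """One Euler step of dx/dt = -k*x (stand-in for the model)."""
    return x0 + dt * (-k * x0)

def mss_objective(k, times, observed):
    """Sum of squared one-segment prediction errors, restarting at the data."""
    sse = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        pred = simulate_segment(observed[i], k, dt)  # restart from data point i
        sse += (pred - observed[i + 1]) ** 2
    return sse

# synthetic decay data with true rate k = 0.5
times = np.linspace(0, 5, 26)
observed = np.exp(-0.5 * times)

grid = np.linspace(0.1, 1.0, 91)
best_k = grid[np.argmin([mss_objective(k, times, observed) for k in grid])]
print(best_k)   # close to 0.5
```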

  14. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K1 correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  15. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  16. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
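The nudging idea can be illustrated on a toy scalar model (ours, not PlaSim): an extra term g·(x_obs − x) added to the tendency keeps a run with a mismatched parameter close to the observations instead of drifting away.

```python
import numpy as np

def integrate_logistic(r, x0, n, dt=0.1, obs=None, gain=0.0):
    """Euler integration of dx/dt = r*x*(1-x), optionally nudged toward obs."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        dxdt = r * x[i - 1] * (1 - x[i - 1])
        if obs is not None:
            dxdt += gain * (obs[i - 1] - x[i - 1])  # nudging term
        x[i] = x[i - 1] + dt * dxdt
    return x

truth = integrate_logistic(r=1.0, x0=0.1, n=200)    # "observations"
free = integrate_logistic(r=1.5, x0=0.1, n=200)     # wrong r, free run
nudged = integrate_logistic(r=1.5, x0=0.1, n=200, obs=truth, gain=2.0)

err_free = np.abs(free - truth).mean()
err_nudged = np.abs(nudged - truth).mean()
print(err_free, err_nudged)   # the nudged error is much smaller
```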

  17. Parameters of cosmological models and recent astronomical observations

    SciTech Connect

    Sharov, G.S.; Vorontsova, E.G. E-mail: elenavor@inbox.ru

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H0 = 70.262±0.319 km s⁻¹ Mpc⁻¹, Ωm = 0.276 (+0.009/-0.008), ΩΛ = 0.769±0.029, Ωk = -0.045±0.032. The GCG model under the restriction α ≥ 0 reduces to the ΛCDM model. Predictions of the multidimensional model depend essentially on 3 data points for H(z) with z ≥ 2.3.

  18. Parameters of cosmological models and recent astronomical observations

    NASA Astrophysics Data System (ADS)

    Sharov, G. S.; Vorontsova, E. G.

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H0 = 70.262±0.319 km s⁻¹ Mpc⁻¹, Ωm = 0.276 (+0.009/-0.008), ΩΛ = 0.769±0.029, Ωk = -0.045±0.032. The GCG model under the restriction α ≥ 0 reduces to the ΛCDM model. Predictions of the multidimensional model depend essentially on 3 data points for H(z) with z ≥ 2.3.
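The kind of fit described above can be sketched with synthetic data (ours, not the paper's 34 H(z) points), using the flat ΛCDM expansion rate H(z) = H0·sqrt(Ωm(1+z)³ + 1 − Ωm) and a brute-force chi-square grid search:

```python
import numpy as np

def H_lcdm(z, H0, Om):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def chi2(H0, Om, z, H_obs, sigma):
    return np.sum(((H_obs - H_lcdm(z, H0, Om)) / sigma) ** 2)

# synthetic H(z) data generated from H0 = 70, Om = 0.28
rng = np.random.default_rng(1)
z = np.linspace(0.1, 2.3, 20)
sigma = np.full_like(z, 5.0)
H_obs = H_lcdm(z, 70.0, 0.28) + rng.normal(0, 5.0, z.size)

# brute-force grid search over (H0, Om)
H0_grid = np.linspace(60, 80, 201)
Om_grid = np.linspace(0.1, 0.5, 201)
chi = np.array([[chi2(h, m, z, H_obs, sigma) for m in Om_grid] for h in H0_grid])
i, j = np.unravel_index(chi.argmin(), chi.shape)
print(H0_grid[i], Om_grid[j])   # near the true values
```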

  19. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment, and the child's autism symptoms. 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires…

  1. Peer Victimization and Rejection: Investigation of an Integrative Model of Effects on Emotional, Behavioral, and Academic Adjustment in Early Adolescence

    ERIC Educational Resources Information Center

    Lopez, Cristy; DuBois, David L.

    2005-01-01

    This study investigated an integrative model of the effects of peer victimization (PV) and peer rejection (PR) on youth adjustment using data from 508 middle-school students. In the proposed model, PV and PR each contribute independently to problems in emotional, behavioral, and academic adjustment. The adverse consequences of PV and PR are each…

  2. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    PubMed

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the simulation viewpoint, we carried out an in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time and that the nucleation rate remains constant after a period of relaxation, both consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady-state nucleation theory.

  3. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  4. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
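The mass-loading chain described above can be illustrated with a toy calculation (all numbers hypothetical, not ERMYN parameter values): airborne activity concentration is mass loading times the activity concentration of the resuspendable material, and the inhalation dose follows from breathing rate, exposure time, and a dose coefficient.

```python
def air_concentration(mass_loading_g_m3, soil_activity_bq_g):
    """Activity concentration in air (Bq/m^3) from resuspended particles."""
    return mass_loading_g_m3 * soil_activity_bq_g

def inhalation_dose(conc_bq_m3, breathing_m3_h, hours, dose_coeff_sv_bq):
    """Committed effective dose (Sv) from inhaling contaminated air."""
    return conc_bq_m3 * breathing_m3_h * hours * dose_coeff_sv_bq

c_air = air_concentration(mass_loading_g_m3=1e-4,   # 100 ug/m^3 of dust
                          soil_activity_bq_g=50.0)  # hypothetical soil activity
dose = inhalation_dose(c_air, breathing_m3_h=1.2, hours=2000,
                       dose_coeff_sv_bq=1e-8)       # hypothetical dose coefficient
print(c_air, dose)
```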

  5. Parameter uncertainty analysis of a biokinetic model of caesium

    DOE PAGES

    Li, W. B.; Klein, W.; Blanchardon, Eric; ...

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.

  6. Parameter uncertainty analysis of a biokinetic model of caesium

    SciTech Connect

    Li, W. B.; Klein, W.; Blanchardon, Eric; Puncher, M; Leggett, Richard Wayne; Oeh, U.; Breustedt, B.; Nosske, Dietmar; Lopez, M.

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.

  7. Parameter uncertainty analysis of a biokinetic model of caesium.

    PubMed

    Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.
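The uncertainty-factor definition used above (square root of the ratio of the 97.5th to the 2.5th percentile) is easy to reproduce on a toy one-compartment model (ours, not the Leggett caesium model): propagate a lognormal uncertainty in a clearance rate and watch the UF of the retention prediction grow with time.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical clearance rate (1/day) with lognormal parameter uncertainty
k = rng.lognormal(mean=np.log(0.07), sigma=0.3, size=20000)

def uf(prediction):
    """Uncertainty factor: sqrt(97.5th percentile / 2.5th percentile)."""
    lo, hi = np.percentile(prediction, [2.5, 97.5])
    return np.sqrt(hi / lo)

uf_day1 = uf(np.exp(-k * 1))      # whole-body retention after 1 day
uf_day100 = uf(np.exp(-k * 100))  # after 100 days
print(uf_day1, uf_day100)         # the UF grows strongly with time
```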

  8. Adjustment of the k-ω SST turbulence model for prediction of airfoil characteristics near stall

    NASA Astrophysics Data System (ADS)

    Matyushenko, A. A.; Garbaruk, A. V.

    2016-11-01

    A version of the k-ω SST turbulence model adjusted for flow around airfoils at high Reynolds numbers is presented. The modified version decreases the eddy viscosity and significantly improves the accuracy of prediction of aerodynamic characteristics in a wide range of angles of attack. However, the considered reduction of eddy viscosity destroys the calibration of the model, which decreases the accuracy of skin-friction coefficient prediction even for relatively simple wall-bounded turbulent flows. Therefore, the area of applicability of the suggested modification is limited to flows around airfoils.

  9. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  10. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title, dated August 1, 1986, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  11. Advances in Parameter Estimation and Data Assimilation for Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Sorooshian, S.

    2001-12-01

    In the past two decades, the availability of new data sources (particularly remotely sensed information) and improved computational tools has resulted in significant developments in the field of hydrologic modeling, from simple flow models to complex numerical models that simulate the coupled behavior of multiple fluxes (hydrologic, chemical, energy, etc.). At the same time, significant improvements have taken place in data assimilation and parameter estimation methods. However, the increasing complexity of models has outpaced the development of appropriate system identification methodologies, and there is a need to design models that are properly constrained by observational data. Independent and collaborative research efforts by various groups worldwide have led to improved modeling techniques, optimization methods for parameter estimation, methods for estimating predictive uncertainty, and methods for evaluating the relative merits of competing models. This talk will review some of the key developments during the past 20 years and speculate on future directions.

  12. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km²) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
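The donor-transfer step can be sketched schematically (our simplification with random stand-in data; the real scheme measures climatic and physiographic similarity over 0.5° cells): for each target cell, pick the 10 most similar donor catchments and average their simulated streamflow.

```python
import numpy as np

rng = np.random.default_rng(42)
n_donors, n_attrs, n_t = 674, 5, 365

donor_attrs = rng.normal(size=(n_donors, n_attrs))   # climate/physiography stand-ins
donor_q = rng.gamma(2.0, 1.0, size=(n_donors, n_t))  # simulated daily Q per donor

def regionalized_q(cell_attrs, k=10):
    """Average the simulated Q of the k most similar donor catchments."""
    dist = np.linalg.norm(donor_attrs - cell_attrs, axis=1)
    nearest = np.argsort(dist)[:k]
    return donor_q[nearest].mean(axis=0)

q = regionalized_q(rng.normal(size=n_attrs))
print(q.shape)   # one year of regionalized daily Q for the target cell
```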

  13. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; van Dijk, Albert; de Roo, Ad; Miralles, Diego; Schellekens, Jaap; McVicar, Tim; Bruijnzeel, Sampurno

    2016-04-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macro-scale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km²) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the ten most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments >5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV using regionalized parameters outperformed nine state-of-the-art macro-scale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via http://water.jrc.ec.europa.eu/HBV/.

  14. Sensor placement for calibration of spatially varying model parameters

    NASA Astrophysics Data System (ADS)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly which requires repeated calibration of the spatially varying parameter, and this significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
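As a minimal illustration of the information-gain criterion above: the K-L divergence between a Gaussian posterior and prior has a closed form, and averaging it over simulated data realizations scores a candidate sensor layout. The univariate Gaussian setting is an assumption for illustration only; the paper works with random-field parameters and surrogate-based Bayesian calibration.

```python
import math

def kl_gaussian(mu_post, sd_post, mu_prior, sd_prior):
    """K-L divergence KL(posterior || prior) for univariate Gaussians."""
    return (math.log(sd_prior / sd_post)
            + (sd_post ** 2 + (mu_post - mu_prior) ** 2) / (2 * sd_prior ** 2)
            - 0.5)

def expected_information_gain(prior, posteriors_given_data):
    """Average K-L divergence over simulated data realizations -- the expected
    information gain used to rank a candidate sensor placement."""
    mu0, sd0 = prior
    gains = [kl_gaussian(mu, sd, mu0, sd0) for mu, sd in posteriors_given_data]
    return sum(gains) / len(gains)
```

A layout whose simulated posteriors are tighter or more shifted than the prior scores a larger expected gain.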

  15. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application.
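The errors-in-variables idea behind (G)TLS can be shown in its simplest scalar form: an orthogonal line fit that treats x and y as equally noisy. The isotropic-noise assumption is a deliberate simplification; the paper's GTLS handles general noise covariances inside local model networks.

```python
import math

def tls_slope_intercept(xs, ys):
    """Orthogonal (total) least-squares line fit y = a*x + b, treating x and y
    as equally noisy -- the scalar special case of the (G)TLS estimator."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Slope minimizing perpendicular (not vertical) distances to the line.
    a = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return a, my - a * mx
```

Unlike ordinary least squares, this estimate stays consistent when the input channel is noisy too.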

  16. Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model

    PubMed Central

    Chen, Mengjie; Ren, Zhao; Zhao, Hongyu; Zhou, Harrison

    2015-01-01

    A tuning-free procedure is proposed to estimate the covariate-adjusted Gaussian graphical model. For each finite subgraph, this estimator is asymptotically normal and efficient. As a consequence, a confidence interval can be obtained for each edge. The procedure enjoys easy implementation and efficient computation through parallel estimation on subgraphs or edges. We further apply the asymptotic normality result to perform support recovery through edge-wise adaptive thresholding. This support recovery procedure is called ANTAC, standing for Asymptotically Normal estimation with Thresholding after Adjusting Covariates. ANTAC outperforms other methodologies in the literature in a range of simulation studies. We apply ANTAC to identify gene-gene interactions using an eQTL dataset. Our result achieves better interpretability and accuracy in comparison with CAMPE. PMID:27499564

  17. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.

  18. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…

  19. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric…

  20. Multiple Model Parameter Adaptive Control for In-Flight Simulation.

    DTIC Science & Technology

    1988-03-01

    dynamics of an aircraft. The plant is controllable by a proportional-plus-integral (PI) control law. This section describes two methods of calculating...adaptive model-following PI control law [20-24]. The control law bases its control gains upon the parameters of a linear difference equation model which

  1. Hubble Expansion Parameter in a New Model of Dark Energy

    NASA Astrophysics Data System (ADS)

    Saadat, Hassan

    2012-01-01

    In this study, we consider a new model of dark energy based on a Taylor expansion of its density and calculate the Hubble expansion parameter for various parameterizations of the equation of state. This model is useful for probing a possible evolution of the dark energy component against current observational data.
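For a flat universe, the Hubble parameter follows directly from the Friedmann equation once an equation-of-state parameterization is chosen. The sketch below uses the common CPL form w(z) = w0 + wa·z/(1+z) as a stand-in; the paper's Taylor-expansion model of the dark energy density is analogous in spirit but not identical.

```python
import math

def hubble(z, h0=70.0, omega_m=0.3, w0=-1.0, wa=0.0):
    """H(z) in km/s/Mpc for a flat universe with CPL dark energy,
    w(z) = w0 + wa*z/(1+z). For w0 = -1, wa = 0 this reduces to LCDM.
    Parameter defaults are illustrative."""
    omega_de = 1.0 - omega_m
    # Analytic dark-energy density evolution for the CPL parameterization.
    de_evol = (1 + z) ** (3 * (1 + w0 + wa)) * math.exp(-3 * wa * z / (1 + z))
    return h0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_de * de_evol)
```

Comparing H(z) curves for different (w0, wa) choices against observational data is exactly the kind of probe the abstract describes.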

  2. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric…

  3. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…

  4. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.
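The Newton-Raphson maximum-likelihood idea can be sketched in one dimension: iterate on the score (first derivative of the log-likelihood) divided by the observed information. The exponential-rate example below is a toy analogue chosen for its known closed-form answer (1/mean); the spacecraft application estimates modal parameters instead.

```python
def newton_ml_rate(xs, lam=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson maximum-likelihood estimate of an exponential rate.
    logL(lam) = n*log(lam) - lam*sum(xs); the analytic MLE is n/sum(xs).
    Start `lam` below 2*n/sum(xs) for guaranteed convergence of this toy."""
    n, s = len(xs), sum(xs)
    for _ in range(max_iter):
        score = n / lam - s        # d logL / d lambda
        hess = -n / lam ** 2       # d^2 logL / d lambda^2
        step = score / hess
        lam -= step                # Newton update
        if abs(step) < tol:
            break
    return lam
```

In the paper the same scheme runs over many modal parameters at once, with the Hessian replaced by the (matrix-valued) Fisher information.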

  5. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
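The Monte Carlo propagation step described above reduces to: draw inputs from their distributions, run the deterministic model once per draw, and summarize the outputs. In this sketch a trivial function stands in for a Delft3D run, and Gaussian input distributions are an assumption for illustration.

```python
import random
import statistics

def monte_carlo_uncertainty(model, input_dists, n=2000, seed=42):
    """Propagate input uncertainty through a deterministic model by random
    sampling. `input_dists` maps input names to (mean, sd) of assumed
    Gaussian distributions; `model` stands in for one deterministic run."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        sample = {k: rng.gauss(mu, sd) for k, (mu, sd) in input_dists.items()}
        outputs.append(model(**sample))
    return statistics.mean(outputs), statistics.stdev(outputs)
```

In practice one would keep the full output sample (not just mean and spread) and check that the statistics have converged before trusting them.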

  6. A Model of Mother-Child Adjustment in Arab Muslim Immigrants to the US

    PubMed Central

    Hough, Edythe S.; Templin, Thomas N.; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne

    2009-01-01

    We examined mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the U.S.A. The sample of 635 mother-child dyads comprised mothers who had emigrated in 1989 or later and had at least one early adolescent child between the ages of 11 and 15 who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and social support; maternal distress; parent-child relationship; and child behavior problems. A structural equation model (SEM) was specified a priori with 17 predicted pathways. With a few exceptions, the final SEM model was highly consistent with the proposed model and had a good fit to the data. The model accounted for 67% of the variance in child behavior problems. Child stressors, mother-child relationship, and maternal stressors were the causal variables that contributed the most to child behavior problems. The model also accounted for 27% of the variance in mother-child relationship. Child active coping, child gender, mother’s education, and maternal distress were all predictive of the mother-child relationship. Mother-child relationship also mediated the effects of maternal distress and child active coping on child behavior problems. These findings indicate that immigrant mothers contribute greatly to adolescent adjustment, both as a source of risk and protection. They also suggest that intervening with immigrant mothers to reduce their stress and strengthening the parent-child relationship are two important areas for promoting adolescent adjustment. PMID:19758737

  7. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. 
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  8. Genetic Investigation of Quantitative Traits Related to Autism: Use of Multivariate Polygenic Models with Ascertainment Adjustment

    PubMed Central

    Sung, Yun Ju; Dawson, Geraldine; Munson, Jeffrey; Estes, Annette; Schellenberg, Gerard D.; Wijsman, Ellen M.

    2005-01-01

    Autism is a severe developmental disorder of unknown etiology but with evidence for genetic influences. Here, we provide evidence for a genetic basis of several quantitative traits that are related to autism. These traits, from the Broader Phenotype Autism Symptom Scale (BPASS), were measured in nuclear families, each ascertained through two probands affected by autism spectrum disorder. The BPASS traits capture the continuum of severity of impairments and may be more informative for genetic studies than are the discrete diagnoses of autism that have been used by others. Using a sample of 201 nuclear families consisting of a total of 694 individuals, we implemented multivariate polygenic models with ascertainment adjustment to estimate heritabilities and genetic and environmental correlations between these traits. Our ascertainment adjustment uses conditioning on the phenotypes of probands, requires no modeling of the ascertainment process, and is applicable to multiplex ascertainment and multivariate traits. This appears to be the first such implementation for multivariate quantitative traits. The marked difference between heritability estimates of the trait for language onset with and without an ascertainment adjustment (0.08 and 0.22, respectively) shows that conclusions are sensitive to whether or not an ascertainment adjustment is used. Among the five BPASS traits that were analyzed, the traits for social motivation and range of interest/flexibility show the highest heritability (0.19 and 0.16, respectively) and also have the highest genetic correlation (0.92). This finding suggests a shared genetic basis of these two traits and that they may be most promising for future gene mapping and for extending pedigrees by phenotyping additional relatives. PMID:15547804

  9. Bias adjustment for hydrological modelling - Comparing methods and reference data sets

    NASA Astrophysics Data System (ADS)

    Kpogo-Nuwoklo, Komlan A.; Fischer, Madlen; Rust, Henning W.; Ulbrich, Uwe; Meredith, Edmund; Vagenas, Christos

    2017-04-01

    In the context of the HORIZON 2020 project BINGO (Bringing INnovation to onGOing water management), high-resolution meteorological driving data from the COSMO-CLM regional model are to be bias corrected for use with various hydrological models. For many of the variables considered, the climate model simulations show a systematic deviation from observations. The expectation value of this error is referred to as bias. To achieve better statistical correspondence between model simulations and the corresponding observational data, it is common practice to adjust this bias prior to subsequent impact studies. A plethora of approaches for bias adjustment have been developed, ranging from simple scaling to more sophisticated approaches. In BINGO, two different approaches are currently used: i) one based on a Generalized Linear Model using seasonal covariates and ii) the Cumulative Distribution Function Transform (CDF-t). While the former aims at correcting the climatological seasonal cycle of simulations to match that of observations, the latter (CDF-t) is based on a cumulative distribution function (CDF) and thus ensures a correction of the full distribution. Focusing on precipitation, a comparison between these two approaches is carried out with two different reference data sets (WATCH and E-OBS). The comparison is mainly based on statistics such as the relative frequency of wet days, the monthly mean and variance. The Wupper catchment (Germany), one of the six research sites studied in BINGO, is used as a showcase.
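The CDF-matching idea underlying CDF-t can be sketched with plain empirical quantile mapping: find a model value's quantile in the model reference CDF and read off the same quantile from the observed CDF. This is a simplified stand-in; the actual CDF-t method additionally accounts for the shift between calibration and projection periods.

```python
import bisect

def quantile_map(value, model_ref, obs_ref):
    """Empirical quantile mapping: locate `value`'s quantile in the sorted
    model reference sample, then return the observation at that quantile.
    A basic sketch of CDF matching (no calibration/projection shift)."""
    m = sorted(model_ref)
    o = sorted(obs_ref)
    q = bisect.bisect_left(m, value) / len(m)      # empirical quantile in [0, 1]
    idx = min(int(q * len(o)), len(o) - 1)
    return o[idx]
```

With a constant additive model bias, this mapping recovers the corresponding observed values; real precipitation correction also needs care with wet-day frequency, which the abstract uses as an evaluation statistic.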

  10. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    PubMed

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  11. Improving the realism of hydrologic model through multivariate parameter estimation

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed depending on the in- or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10
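Multivariate calibration of the kind described above needs a composite objective that scores both discharge and TWS fits. A minimal sketch, using Nash-Sutcliffe efficiency for both terms with an equal weighting that is an illustrative choice, not the paper's exact metric:

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect simulation, 0 matches the
    skill of simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

def multi_objective(sim_q, obs_q, sim_tws, obs_tws, w_q=0.5):
    """Composite calibration score combining discharge (Q) and total water
    storage (TWS) anomaly fits; the optimizer maximizes this."""
    return w_q * nse(sim_q, obs_q) + (1 - w_q) * nse(sim_tws, obs_tws)
```

Trading off w_q exposes exactly the effect the abstract reports: parameter sets that keep discharge skill while changing how precipitation is partitioned internally.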

  12. Bayesian methods for characterizing unknown parameters of material models

    SciTech Connect

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
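A toy version of the posterior characterization described here: random-walk Metropolis sampling for one unknown parameter (a Gaussian mean) given noisy measurements and a wide Gaussian prior. All distributions and tuning values are illustrative assumptions; the paper's application involves spatial correlation parameters and surrogate models.

```python
import math
import random

def log_posterior(theta, data, prior_mu=0.0, prior_sd=10.0, noise_sd=1.0):
    """Log prior plus Gaussian log likelihood for an unknown mean `theta`
    (additive constants dropped; they cancel in the acceptance ratio)."""
    lp = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    ll = sum(-0.5 * ((x - theta) / noise_sd) ** 2 for x in data)
    return lp + ll

def metropolis(data, n=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for the posterior of `theta`."""
    rng = random.Random(seed)
    theta, chain = 0.0, []
    lp = log_posterior(theta, data)
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

After discarding burn-in, the chain's histogram approximates the posterior distribution of the unknown parameter.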

  13. Bayesian methods for characterizing unknown parameters of material models

    DOE PAGES

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.

  14. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan: for Biosphere Modeling and Expert Support" (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, "Soil Related Input Parameters for the Biosphere Model", is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. "The Biosphere Model Report" (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  15. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
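The single diode equation is implicit in the current, so evaluating an I-V curve from a candidate parameter set already requires a numerical solve. A minimal sketch using bisection; the five parameter values below are illustrative, not fitted to any real module.

```python
import math

def diode_current(v, il=5.0, i0=1e-9, rs=0.1, rsh=200.0, n_vt=1.5 * 0.02585):
    """Solve the implicit single-diode equation for current at voltage `v`:
        I = IL - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    by bisection (the residual below is monotonically decreasing in I)."""
    def residual(i):
        vd = v + i * rs                      # junction voltage
        return il - i0 * (math.exp(vd / n_vt) - 1) - vd / rsh - i
    lo, hi = -il, 2 * il                     # bracket the root for sane parameters
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Parameter estimation then amounts to adjusting (IL, I0, Rs, Rsh, n) so curves generated this way match measured I-V curves across irradiance and temperature conditions.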

  16. The parameter landscape of a mammalian circadian clock model

    NASA Astrophysics Data System (ADS)

    Jolley, Craig; Ueda, Hiroki

    2013-03-01

    In mammals, an intricate system of feedback loops enables autonomous, robust oscillations synchronized with the daily light/dark cycle. Based on recent experimental evidence, we have developed a simplified dynamical model and parameterized it by compiling experimental data on the amplitude, phase, and average baseline of clock gene oscillations. Rather than identifying a single ``optimal'' parameter set, we used Monte Carlo sampling to explore the fitting landscape. The resulting ensemble of model parameter sets is highly anisotropic, with very large variances along some (non-trivial) linear combinations of parameters and very small variances along others. This suggests that our model exhibits ``sloppy'' features that have previously been identified in various multi-parameter fitting problems. We will discuss the implications of this model fitting behavior for the reliability of both individual parameter estimates and systems-level predictions of oscillator characteristics, as well as the impact of experimental constraints. The results of this study are likely to be important both for improved understanding of the mammalian circadian oscillator and as a test case for more general questions about the features of systems biology models.
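"Sloppiness" shows up as a huge spread in the eigenvalues of the cost-function Hessian: stiff directions (large eigenvalues) are well constrained by data, sloppy directions barely at all. A self-contained toy with an analytically known Hessian (the quadratic cost and its scale are assumptions for illustration):

```python
import math

def hessian_eigenvalues(h11, h12, h22):
    """Eigenvalues of a symmetric 2x2 cost-function Hessian, in closed form."""
    tr, det = h11 + h22, h11 * h22 - h12 * h12
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - disc, tr / 2 + disc

# Toy "sloppy" cost: J(p1, p2) = (p1 + p2)**2 + 1e-6 * (p1 - p2)**2.
# Its Hessian is [[2 + 2e-6, 2 - 2e-6], [2 - 2e-6, 2 + 2e-6]]; the stiff
# direction is (1, 1), the sloppy direction (1, -1).
lo_eig, hi_eig = hessian_eigenvalues(2 + 2e-6, 2 - 2e-6, 2 + 2e-6)
sloppiness = hi_eig / lo_eig   # eigenvalue ratio spans many orders of magnitude
```

In the Monte Carlo ensemble of the abstract, the same anisotropy appears as very large variances along some linear combinations of parameters and very small variances along others.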

  17. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
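The bootstrap step above has a simple generic shape: resample the input data with replacement, re-run the estimator, and collect the resulting spread of parameter estimates. A minimal sketch (the estimator and data here are placeholders, not the cone-model inversion):

```python
import random
import statistics

def bootstrap_distribution(estimator, samples, n_boot=1000, seed=0):
    """Bootstrap a model-parameter distribution: resample `samples` with
    replacement and re-run `estimator` on each resample. Returns the mean
    and standard deviation of the bootstrap estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(samples) for _ in samples]
        estimates.append(estimator(resample))
    return statistics.mean(estimates), statistics.stdev(estimates)
```

The resulting distributions, rather than single point estimates, are what feed the ensemble heliospheric predictions the abstract mentions.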

  19. Evaluation of Personnel Parameters in Software Cost Estimating Models

    DTIC Science & Technology

    2007-11-02

    ACAP, 1.42; all other parameters would be set to the nominal value of one. The effort multiplier will be a fixed value if the model uses linear... data. The calculated multiplier values were the... Table 8. COSTAR Trials For Multiplier Calculation: Run, ACAP, PCAP, PCON, APEX, PLEX, LTEX, Effort... impact. Table 9. COCOMO II Personnel Parameters Effort Multipliers: Driver | Lowest | Nominal | Highest; Analyst Capability (ACAP): 1.42 | 1.00 | 0.71

  20. Parameters and variables appearing in repository design models

    SciTech Connect

    Curtis, R.H.; Wart, R.J.

    1983-12-01

    This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables.

  1. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  2. Evaluation of experiments for estimation of dynamical crop model parameters.

    PubMed

    Ioslovich, Ilya; Gutman, Per-Olof

    2007-07-01

    Planned experiments are usually expected to provide maximal benefit within limited costs. However, the optimal design of experiments is known to be difficult, in particular when only a limited number of parameters can be estimated because the available experiments are noninformative. A useful method for this case, based on the dominant parameter selection (DPS) procedure, is considered. The methodology is illustrated here with data from five planned experiments related to the NICOLET lettuce growth model. The maximal number and the list of estimated parameters are determined while the condition number of the Fisher information matrix (modified E-criterion) is kept below a given upper constraint.
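
The modified E-criterion can be illustrated with a small numerical sketch. The sensitivity matrix below is random, not from the NICOLET model: the point is only that the condition number of the Fisher information matrix blows up when a nearly redundant parameter is added to the estimated set.

```python
import numpy as np

def fisher_condition(J):
    """Condition number of the Fisher information matrix F = J^T J,
    where J is a sensitivity matrix d(output)/d(parameter) and unit
    measurement noise is assumed."""
    F = J.T @ J
    s = np.linalg.svd(F, compute_uv=False)
    return s[0] / s[-1]   # ratio of largest to smallest singular value

rng = np.random.default_rng(0)
J_full = rng.normal(size=(50, 6))
# Make the 6th parameter nearly redundant with the 5th.
J_full[:, 5] = J_full[:, 4] + 0.001 * rng.normal(size=50)

print(fisher_condition(J_full[:, :5]))  # well-conditioned 5-parameter set
print(fisher_condition(J_full))         # ill-conditioned 6-parameter set
```

Under the modified E-criterion, the DPS procedure would keep the 5-parameter subset and reject the full set once the condition number exceeds the chosen upper constraint.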

  3. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    PubMed

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. Copyright © 2016 Elsevier B.V. All rights reserved.
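
The accuracy metric reported above, RMS error expressed as a percentage of body weight (% BW), can be computed as in this short sketch; the force samples and body weight here are invented for illustration, not the study's data.

```python
import math

def rms_error_percent_bw(estimated, measured, body_weight_n):
    """Root-mean-square error between estimated and measured GRF samples,
    expressed as a percentage of body weight (all forces in newtons)."""
    mse = sum((e - m) ** 2 for e, m in zip(estimated, measured)) / len(measured)
    return 100.0 * math.sqrt(mse) / body_weight_n

# Hypothetical vertical GRF samples (N) over a stance phase, 700 N subject.
measured  = [0.0, 350.0, 700.0, 770.0, 700.0, 350.0, 0.0]
estimated = [10.0, 330.0, 720.0, 750.0, 710.0, 360.0, 5.0]
print(round(rms_error_percent_bw(estimated, measured, 700.0), 2))
```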

  4. Parameter identifiability and estimation of HIV/AIDS dynamic models.

    PubMed

    Wu, Hulin; Zhu, Haihong; Miao, Hongyu; Perelson, Alan S

    2008-04-01

    We use a technique from engineering (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005) to investigate the algebraic identifiability of a popular three-dimensional HIV/AIDS dynamic model containing six unknown parameters. We find that not all six parameters in the model can be identified if only the viral load is measured; instead only four parameters and the product of two parameters (N and lambda) are identifiable. We introduce the concepts of an identification function and an identification equation and propose the multiple time point (MTP) method to form the identification function, which is an alternative to the previously developed higher-order derivative (HOD) method (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005). We show that the newly proposed MTP method has advantages over the HOD method in practical implementation. We also discuss the effect of the initial values of state variables on the identifiability of unknown parameters. We conclude that the initial values of output (observable) variables are part of the data that can be used to estimate the unknown parameters, but the identifiability of unknown parameters is not affected even if these initial values are measured with error. These noisy initial values only increase the estimation error of the unknown parameters. However, having the initial values of the latent (unobservable) state variables exactly known may help to identify more parameters. In order to validate the identifiability results, simulation studies are performed to estimate the unknown parameters and initial values from simulated noisy data. We also apply the proposed methods to a clinical data set.

  5. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesized sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process used combines a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in the case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process was run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.
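
The global stage of such a hybrid optimizer can be sketched with a plain simulated annealing loop. The cost function below is an arbitrary multi-modal stand-in, not the paper's perceptual tristimulus criterion, and in the hybrid scheme a gradient refinement step would follow the annealing stage.

```python
import math
import random

def cost(x):
    # Hypothetical multi-modal cost standing in for the perceptual
    # criterion; its global minimum lies near x = 2.2.
    return (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x) + 1.5

def simulated_annealing(x, temp=2.0, cooling=0.995, steps=4000, seed=1):
    rng = random.Random(seed)
    cur_c = cost(x)
    best_x, best_c = x, cur_c
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        cand_c = cost(cand)
        # Accept worse candidates with a temperature-dependent probability,
        # which lets the search escape local minima.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / temp):
            x, cur_c = cand, cand_c
            if cur_c < best_c:
                best_x, best_c = x, cur_c
        temp *= cooling
    return best_x

best = simulated_annealing(x=-4.0)  # start far from the global minimum
```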

  6. Parameter Estimation for a crop model: separate and joint calibration of soil and plant parameters

    NASA Astrophysics Data System (ADS)

    Hildebrandt, A.; Jackisch, C.; Luis, S.

    2008-12-01

    Vegetation plays a major role in both the atmospheric and the terrestrial water cycle. A great deal of the vegetation cover in the developed world consists of agriculturally used land (44% of the territory of the EU). Crop models have therefore become increasingly prominent for studying the impact of Global Change on economic welfare as well as the influence of vegetation on climate and its feedbacks with hydrological processes. This use implies that crop models properly reflect the soil water balance and the vertical exchange with the atmosphere. Although crop models can be incorporated in Surface Vegetation Atmosphere Transfer Schemes for that purpose, their main focus has traditionally been on predicting yield, not water and energy fluxes. In this research we use data from two lysimeters in Brandis (Saxony, Germany), which have been planted with the crops of the surrounding farm, to test the capability of the crop model in SWAP. The lysimeters contain different natural soil cores, leading to substantially different yields. This experiment offers the opportunity to test whether the crop model is portable, that is, whether a calibrated crop model can be moved between locations. When using the default parameters for the respective environment, the model reproduces the differences in yield and LAI between the lysimeters neither quantitatively nor qualitatively. Separate calibration of soil and plant parameters performed poorly compared to joint calibration. This suggests that the model is not portable and needs to be calibrated for individual locations, based on measurements or expert knowledge.

  7. Control of the SCOLE configuration using distributed parameter models

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained and no model reduction is required before the controller is realized. One specific LQG controller for continuum models had been derived, but other optimal controllers for more general performances need to be studied.

  8. Control of the SCOLE configuration using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-06-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained and no model reduction is required before the controller is realized. One specific LQG controller for continuum models had been derived, but other optimal controllers for more general performances need to be studied.

  9. Moment structures of parameter-driven count time series models

    NASA Astrophysics Data System (ADS)

    Bukhari, Nawwal Ahmad; Beng, Koh You; Mohamed, Ibrahim

    2017-05-01

    This paper focuses on a parameter-driven count time series model with three different distributions. We provide a brief description of the first-order autoregressive, AR(1), latent process. We consider the first four central moments of each model: mean, variance, skewness and kurtosis. Next, the autocovariance and autocorrelation functions for each model are derived. We outline and discuss possible directions of future research.

  10. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    PubMed

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  11. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew Robert

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained under the assumption that some experimental data are available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. In addition to the nominal approach, an alternative distribution, together with suitable corrections, can be used to expand the scope of this method. This method is one of very few that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.

  12. QCD-inspired determination of NJL model parameters

    NASA Astrophysics Data System (ADS)

    Springer, Paul; Braun, Jens; Rechenberger, Stefan; Rennecke, Fabian

    2017-03-01

    The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.

  13. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir angle. Additional practical processing considerations are addressed, and a FORTRAN source listing of the subroutines used in the waveform fitting is included. While the description is for the Seasat-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.

  14. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
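
As an example of the kind of objective function such a toolbox provides, the Nash-Sutcliffe model efficiency can be written in a few lines; the series below are invented for illustration.

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe model efficiency: 1.0 is a perfect fit, 0.0 means
    the model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical observed and simulated streamflow series.
observed  = [2.0, 4.0, 6.0, 8.0, 6.0, 4.0]
simulated = [2.5, 3.5, 6.5, 7.5, 6.0, 4.5]
print(nash_sutcliffe(simulated, observed))
```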

  15. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. A filter system used for filtering the noisy measurement waveforms is also described. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  16. Dynamic Factor Analysis Models With Time-Varying Parameters.

    PubMed

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-04-11

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model with vector autoregressive relations and time-varying cross-regression parameters at the factor level. Using techniques drawn from the state-space literature, the model was fitted to a set of daily affect data (over 71 days) from 10 participants who had been diagnosed with Parkinson's disease. Our empirical results lend partial support and some potential refinement to the Dynamic Model of Activation with regard to how the time dependencies between positive and negative affects change over time. A simulation study is conducted to examine the performance of the proposed techniques when (a) changes in the time-varying parameters are represented using the true model of change, (b) supposedly time-invariant parameters are represented as time-varying, and

  17. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food and is simulated using COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.

  18. [Parameter uncertainty analysis for urban rainfall runoff modelling].

    PubMed

    Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei

    2012-07-01

    An urban watershed in Xiamen was selected for parameter uncertainty analysis of urban stormwater runoff modelling, in terms of identification and sensitivity analysis, based on the storm water management model (SWMM) using Monte Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters with larger K-S values in the hydrological and hydraulic module, and the rank of K-S values in that module is Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the parameters of the exponential washoff model (Coefficient and Exponent) and the Max. Buildup parameter of the saturation buildup model in three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
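
The identifiability measure in RSA is the two-sample Kolmogorov-Smirnov distance between the "behavioral" parameter samples (acceptable model fit) and the "non-behavioral" ones; a large distance marks the parameter as sensitive. A minimal sketch with invented samples:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    cdf = lambda sample, x: sum(v <= x for v in sample) / len(sample)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

# Hypothetical marginal samples of one parameter after Monte Carlo runs,
# split by whether the run met the acceptance (behavioral) criterion.
behavioral     = [0.10, 0.12, 0.15, 0.18, 0.20, 0.22]
non_behavioral = [0.40, 0.45, 0.50, 0.55, 0.60, 0.65]
print(ks_statistic(behavioral, non_behavioral))  # fully separated samples
```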

  19. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
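
The parametric testing step can be sketched as a seeded Monte Carlo random search over parameter sets. Everything here is hypothetical: the parameter names, their bounds, and the toy "simulation" stand in for LifeModeler runs scored against observed activation.

```python
import random

def monte_carlo_search(score, sample, n_trials=500, seed=7):
    """Draw random parameter sets and keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        p = sample(rng)
        s = score(p)
        if s < best_score:
            best_params, best_score = p, s
    return best_params, best_score

def predicted_activation(p):
    # Toy stand-in for running the musculoskeletal simulation.
    scale, gain = p
    return gain / scale

target = 0.60  # hypothetical observed peak activation (60%)
score = lambda p: abs(predicted_activation(p) - target)
sample = lambda rng: (rng.uniform(0.5, 2.0), rng.uniform(0.1, 1.0))
params, err = monte_carlo_search(score, sample)
```

In practice each "trial" is a full simulation run, so combinatorial reduction of the parameter space, as described above, keeps the number of runs tractable.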

  20. Comparing spatial and temporal transferability of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Patil, Sopan; Stieglitz, Marc

    2015-04-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
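
The Kling-Gupta efficiency used above to score the transfer schemes decomposes model error into correlation, variability, and bias components. A sketch with invented flow series:

```python
import statistics

def kling_gupta_efficiency(simulated, observed):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha the ratio of standard deviations, and beta
    the ratio of means between simulation and observation; 1.0 is perfect."""
    mean_s, mean_o = statistics.fmean(simulated), statistics.fmean(observed)
    std_s, std_o = statistics.pstdev(simulated), statistics.pstdev(observed)
    cov = statistics.fmean((s - mean_s) * (o - mean_o)
                           for s, o in zip(simulated, observed))
    r = cov / (std_s * std_o)
    alpha, beta = std_s / std_o, mean_s / mean_o
    return 1.0 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

# Hypothetical observed and simulated streamflow.
observed  = [1.0, 3.0, 5.0, 4.0, 2.0]
simulated = [1.2, 2.9, 5.3, 3.8, 2.1]
print(kling_gupta_efficiency(simulated, observed))
```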

  1. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
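
The c-statistic discussed above can be computed directly as the proportion of concordant event/non-event pairs, which is equivalent to the area under the ROC curve; the predicted risks and outcomes below are invented for illustration.

```python
def c_statistic(predicted_risk, outcome):
    """Concordance (c) statistic: probability that a randomly chosen
    patient with the outcome has a higher predicted risk than a randomly
    chosen patient without it (ties count one half)."""
    events     = [p for p, y in zip(predicted_risk, outcome) if y == 1]
    non_events = [p for p, y in zip(predicted_risk, outcome) if y == 0]
    concordant = sum((e > n) + 0.5 * (e == n)
                     for e in events for n in non_events)
    return concordant / (len(events) * len(non_events))

# Hypothetical predicted mortality risks and observed outcomes (1 = died).
risk    = [0.10, 0.20, 0.30, 0.40, 0.50, 0.80]
outcome = [0,    0,    1,    0,    1,    1]
print(c_statistic(risk, outcome))
```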

  2. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    PubMed

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.

  3. Integration of Chang'E-2 imagery and LRO laser altimeter data with a combined block adjustment for precision lunar topographic modeling

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Hu, Han; Guo, Jian

    2014-04-01

    Lunar topographic information is essential for lunar scientific investigations and exploration missions. Lunar orbiter imagery and laser altimeter data are two major data sources for lunar topographic modeling. Most previous studies have processed the imagery and laser altimeter data separately for lunar topographic modeling, and there are usually inconsistencies between the derived lunar topographic models. This paper presents a novel combined block adjustment approach to integrate multiple strips of the Chinese Chang'E-2 imagery and NASA's Lunar Reconnaissance Orbiter (LRO) Laser Altimeter (LOLA) data for precision lunar topographic modeling. The participants of the combined block adjustment include the orientation parameters of the Chang'E-2 images, the intra-strip tie points derived from the Chang'E-2 stereo images of the same orbit, the inter-strip tie points derived from the overlapping area of two neighboring Chang'E-2 image strips, and the LOLA points. Two constraints are incorporated into the combined block adjustment: a local surface constraint and an orbit height constraint, which are specifically designed to remedy the large inconsistencies between the Chang'E-2 and LOLA data sets. The output of the combined block adjustment is the improved orientation parameters of the Chang'E-2 images and ground coordinates of the LOLA points, from which precision lunar topographic models can be generated. The performance of the developed approach was evaluated using the Chang'E-2 imagery and LOLA data in the Sinus Iridum area and the Apollo 15 landing area. The experimental results revealed that the mean absolute image residuals between the Chang'E-2 image strips were drastically reduced from tens of pixels before the adjustment to sub-pixel level after adjustment. Digital elevation models (DEMs) with 20 m resolution were generated using the Chang'E-2 imagery after the combined block adjustment. Comparison of the Chang'E-2 DEM with the LOLA DEM showed a good

  4. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.

  5. Models wagging the dog: are circuits constructed with disparate parameters?

    PubMed

    Nowotny, Thomas; Szücs, Attila; Levi, Rafael; Selverston, Allen I

    2007-08-01

    In a recent article, Prinz, Bucher, and Marder (2004) used a database modeling approach to address the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary widely from one individual to another. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word on this fundamental question has not yet been spoken.

  6. Do land parameters matter in large-scale hydrological modelling?

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land
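    The "forcing-only" benchmark idea can be illustrated with a toy pooled regression: one set of coefficients shared by all sites, with no site-specific land parameters. All data below are synthetic, and the simple linear model stands in for whatever statistical tool the paper actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly forcing and runoff for 20 sites: runoff depends on
# forcing only (the CLPH assumption), plus noise.
n_sites, n_months = 20, 120
precip = rng.gamma(2.0, 40.0, size=(n_sites, n_months))   # mm/month
temp = rng.normal(10.0, 8.0, size=(n_sites, n_months))    # deg C
runoff = (0.5 * precip - 1.2 * np.maximum(temp, 0.0)
          + rng.normal(0.0, 5.0, size=(n_sites, n_months)))

# One pooled model for ALL sites: no site-specific land parameters.
X = np.column_stack([precip.ravel(),
                     np.maximum(temp, 0.0).ravel(),
                     np.ones(n_sites * n_months)])
y = runoff.ravel()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

r2 = 1.0 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"pooled-model R^2: {r2:.2f}")
```

    If such a pooled, land-parameter-free model matches or beats a process model at held-out sites, the land parameters in the process model are evidently not where the predictive skill comes from.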

  7. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  8. Estimation of the parameters of ETAS models by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  9. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036
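    These three records name Simulated Annealing but not its implementation details. A generic SA sketch on a stand-in likelihood (a one-parameter exponential inter-event-time model, far simpler than ETAS) shows the accept/reject mechanics:

```python
import math
import random

random.seed(42)

# Toy inter-event times; the MLE of the exponential rate is 1/mean.
data = [0.8, 1.1, 0.4, 2.3, 0.9, 1.7, 0.6, 1.2]

def neg_log_lik(rate):
    if rate <= 0.0:
        return float("inf")
    return -sum(math.log(rate) - rate * t for t in data)

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Downhill moves are always accepted; uphill moves are accepted
        # with Boltzmann probability, which shrinks as the system cools.
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best

rate_hat = anneal(neg_log_lik, x0=0.5)
print(round(rate_hat, 2))  # should approach 1/mean(data) = 8/9
```

    The uphill acceptance step is what lets SA escape local optima; the paper's finding is that on small catalogs the likelihood surface itself (correlated ETAS parameters) limits what any global optimizer can recover.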

  10. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    USGS Publications Warehouse

    Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and in soils were used to calibrate the model. To investigate the ability of the available observations to resolve different numbers of adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable
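    Nonlinear inversion of the kind described, adjusting model parameters until simulated stocks match observed ones, can be sketched with a toy one-pool carbon model. This is not CENTURY; the model form, rates, and noise level are all illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy one-pool model: stock C(t) = I/k + (C0 - I/k)*exp(-k*t),
# with input flux I and turnover rate k as the adjustable parameters.
C0 = 10.0
t = np.linspace(0.0, 50.0, 12)
true_k, true_I = 0.08, 1.2

def model(p):
    k, I = p
    return I / k + (C0 - I / k) * np.exp(-k * t)

obs = model([true_k, true_I]) + rng.normal(0.0, 0.1, t.size)

# Inversion: adjust [k, I] to minimize residuals against the "stocks".
fit = least_squares(lambda p: model(p) - obs, x0=[0.2, 0.5],
                    bounds=([1e-3, 0.0], [1.0, 10.0]))
k_hat, I_hat = fit.x
print(f"k={k_hat:.3f} (true {true_k}), I={I_hat:.2f} (true {true_I})")
```

    With two parameters and a dozen observations this inversion is well posed; the paper's nine-setup experiment probes exactly where that stops being true (one parameter under-fits, five over-fit).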

  11. Climate change decision-making: Model & parameter uncertainties explored

    SciTech Connect

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives and helps set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by the model structure choice rather than by parameter uncertainties.
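    Parameter-uncertainty propagation of the kind ICAM performs can be sketched with a two-parameter Monte Carlo toy. The distributions and the damage function below are illustrative only, not ICAM 2.1's:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Two uncertain parameters, propagated by Monte Carlo through a toy
# damage calculation.
sensitivity = rng.lognormal(np.log(3.0), 0.4, size=n)  # K per CO2 doubling
damage_exp = rng.normal(2.0, 0.3, size=n)              # damage exponent
warming = sensitivity * 1.0                            # one effective doubling
damages = 0.005 * warming ** damage_exp                # fraction of GDP, toy form

lo, med, hi = np.percentile(damages, [5, 50, 95])
print(f"damages (% GDP): 5th={100*lo:.2f}  median={100*med:.2f}  95th={100*hi:.2f}")
```

    Ranking which input distribution drives the output spread (e.g. by rank correlation between each sampled parameter and the output) is the basic mechanism behind the paper's finding that climatic uncertainties dominate.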

  12. Resolution of a Rank-Deficient Adjustment Model Via an Isomorphic Geometrical Setup with Tensor Structure.

    DTIC Science & Technology

    1987-03-01

    Final Report (AFGL-TR-87-0102). [The DTIC record's abstract is OCR-garbled; the recoverable fragments concern the transformation of multiple integrals and the associated metric tensors of the isomorphic geometrical setup with tensor structure.]

  13. Risk Adjustment for Determining Surgical Site Infection in Colon Surgery: Are All Models Created Equal?

    PubMed

    Muratore, Sydne; Statz, Catherine; Glover, J J; Kwaan, Mary; Beilman, Greg

    2016-04-01

    Colon surgical site infections (SSIs) are being utilized increasingly as a quality measure for hospital reimbursement and public reporting. The Centers for Medicare and Medicaid Services (CMS) now require reporting of colon SSI, which is entered through the U.S. Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). However, the CMS's model for determining expected SSIs uses different risk adjustment variables than does NHSN. We hypothesize that CMS's colon SSI model will predict lower expected infection rates than will NHSN. Colon SSI data were reported prospectively to NHSN from 2012-2014 for the six Fairview Hospitals (1,789 colon procedures). We compared expected quarterly SSIs and standardized infection ratios (SIRs) generated by CMS's risk-adjustment model (age and American Society of Anesthesiologist [ASA] classification) vs. NHSN's (age, ASA classification, procedure duration, endoscope [including laparoscope] use, medical school affiliation, hospital bed number, and incision class). The patients with more complex colon SSIs were more likely to be male (60% vs. 44%; p = 0.011), to have contaminated/dirty incisions (21% vs. 10%; p = 0.005), and to have longer operations (235 min vs. 156 min; p < 0.001) and were more likely to be at a medical school-affiliated hospital (53% vs. 40%; p = 0.032). For Fairview Hospitals combined, CMS calculated a lower number of expected quarterly SSIs than did the NHSN (4.58 vs. 5.09 SSIs/quarter; p = 0.002). This difference persisted in a university hospital (727 procedures; 2.08 vs. 2.33; p = 0.002) and a smaller, community-based hospital (565 procedures; 1.31 vs. 1.42; p = 0.002). There were two quarters in which CMS identified Fairview's SIR as an outlier for complex colon SSIs (p = 0.05 and 0.04), whereas NHSN did not (p = 0.06 and 0.06). The CMS's current risk-adjustment model using age and ASA classification predicts lower rates of expected colon

  14. A molecular collision operator of adjustable direction for the discrete velocity direction model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenyu; Peng, Cheng; Xu, Jianzhong

    2017-10-01

    The discrete velocity direction model is an approximate method to the Boltzmann equation. A developed molecular collision operator for the model is presented in this paper. Under the new operator, the discrete directions of molecules are adjustable, namely, both the number and the angles of discrete directions can be changed as needed in the discrete velocity direction model. At the same time, the governing equations remain unchanged when the number of discrete directions changes. In fact, with the continuous molecular speed, the discrete velocity direction model has been able to employ any discrete velocities in numerical calculations. The discrete velocity direction model under the new collision operator was applied to several benchmark micro-scale flows in this paper, and the influence of the number of discrete velocities on the computational accuracy was analyzed. The numerical results show that the accuracy of the discrete velocity direction model can be improved significantly by employing more discrete directions, especially for gas flows at large Knudsen number. With appropriate discrete velocities, this model has been able to give accurate numerical results in all flow regimes. In addition, it is proved that the discrete velocity direction model under the new collision operator satisfies a global H theorem unconditionally, which means that the new operator further improves the intrinsic stability of the discrete velocity direction model.

  15. Force Field Independent Metal Parameters Using a Nonbonded Dummy Model

    PubMed Central

    2014-01-01

    The cationic dummy atom approach provides a powerful nonbonded description for a range of alkaline-earth and transition-metal centers, capturing both structural and electrostatic effects. In this work we refine existing literature parameters for octahedrally coordinated Mn2+, Zn2+, Mg2+, and Ca2+, as well as providing new parameters for Ni2+, Co2+, and Fe2+. In all the cases, we are able to reproduce both M2+–O distances and experimental solvation free energies, which has not been achieved to date for transition metals using any other model. The parameters have also been tested using two different water models and show consistent performance. Therefore, our parameters are easily transferable to any force field that describes nonbonded interactions using Coulomb and Lennard-Jones potentials. Finally, we demonstrate the stability of our parameters in both the human and Escherichia coli variants of the enzyme glyoxalase I as showcase systems, as both enzymes are active with a range of transition metals. The parameters presented in this work provide a valuable resource for the molecular simulation community, as they extend the range of metal ions that can be studied using classical approaches, while also providing a starting point for subsequent parametrization of new metal centers. PMID:24670003
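    The transferability claim rests on the pairwise energy form the abstract names: Coulomb plus Lennard-Jones. A generic sketch of that pair energy (epsilon, sigma, and charges are placeholders, not the paper's fitted dummy-model parameters):

```python
import math

# Coulomb constant in kcal*Angstrom/(mol*e^2)
COULOMB_K = 332.06

def pair_energy(r, eps, sigma, q1, q2):
    """Nonbonded pair energy: 12-6 Lennard-Jones plus Coulomb."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 ** 2 - sr6)     # Lennard-Jones 12-6 term
    coul = COULOMB_K * q1 * q2 / r        # Coulomb term
    return lj + coul

# With zero charges, the LJ minimum sits at r = 2^(1/6) * sigma,
# where the energy equals -eps.
sigma, eps = 3.0, 0.15
r_min = 2.0 ** (1.0 / 6.0) * sigma
print(round(pair_energy(r_min, eps, sigma, 0.0, 0.0), 3))  # -0.15
```

    Any force field whose nonbonded terms take this form can, as the abstract argues, reuse parameters fitted in one package in another.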

  16. An information approach to regularization parameter selection under model misspecification

    NASA Astrophysics Data System (ADS)

    Urmanov, A. M.; Gribok, A. V.; Hines, J. W.; Uhrig, R. E.

    2002-10-01

    We review the information approach to regularization parameter selection and its information complexity extension for the solution of discrete ill-posed problems. An information criterion for regularization parameter selection was first proposed by Shibata in the context of ridge regression as an extension of Takeuchi's information criterion. In the information approach, the regularization parameter value is chosen to maximize the mean expected log likelihood (MELL) of a model whose parameters are estimated using the maximum penalized likelihood method. Under the Gaussian noise assumption such a choice coincides with the minimum of mean predictive error choice. Maximization of the MELL corresponds to minimization of the mean Kullback-Leibler information, which measures the deviation of the approximating (model) distribution from the true one. The resulting regularization parameter selection methods can handle possible functional and distributional misspecifications when the usual assumptions of Gaussian noise and/or linear relationship have been made but not met. We also suggest that in engineering applications it is beneficial to find ways of lowering the risk of getting grossly under-regularized solutions and that the new information complexity regularization parameter selection method (RPSM) is one of the possibilities. Several examples of applying the reviewed RPSMs are given.
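    The idea of choosing the regularization parameter by an information criterion can be sketched for ridge regression, using the trace of the hat matrix as the effective number of parameters. This is a generic AIC-style score, not Shibata's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned toy regression: nearly collinear columns.
n, p = 50, 8
base = rng.normal(size=(n, 1))
X = base + 0.05 * rng.normal(size=(n, p))
beta = np.zeros(p); beta[0] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)

def aic_score(lam):
    # Ridge hat matrix H = X (X'X + lam*I)^(-1) X'; trace(H) plays the
    # role of the effective number of parameters in the penalty term.
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    rss = float(np.sum((y - H @ y) ** 2))
    return n * np.log(rss / n) + 2.0 * np.trace(H)

grid = np.logspace(-6, 2, 50)
lam_best = min(grid, key=aic_score)
print(f"selected regularization parameter: {lam_best:.3g}")
```

    Larger lambda shrinks trace(H) (fewer effective parameters) but inflates the residual sum of squares; the criterion trades the two off, which is the fit-versus-complexity balance the abstract describes.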

  17. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study

    PubMed Central

    Girling, Alan J; Hofer, Timothy P; Wu, Jianhua; Chilton, Peter J; Nicholl, Jonathan P; Mohammed, Mohammed A; Lilford, Richard J

    2012-01-01

    Risk-adjustment schemes are used to monitor hospital performance, on the assumption that excess mortality not explained by case mix is largely attributable to suboptimal care. We have developed a model to estimate the proportion of the variation in standardised mortality ratios (SMRs) that can be accounted for by variation in preventable mortality. The model was populated with values from the literature to estimate a predictive value of the SMR in this context—specifically the proportion of those hospitals with SMRs among the highest 2.5% that fall among the worst 2.5% for preventable mortality. The extent to which SMRs reflect preventable mortality rates is highly sensitive to the proportion of deaths that are preventable. If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable. The model offers a ‘reality check’ for case mix adjustment schemes designed to isolate the preventable component of any outcome rate. PMID:23069860

  18. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study.

    PubMed

    Girling, Alan J; Hofer, Timothy P; Wu, Jianhua; Chilton, Peter J; Nicholl, Jonathan P; Mohammed, Mohammed A; Lilford, Richard J

    2012-12-01

    Risk-adjustment schemes are used to monitor hospital performance, on the assumption that excess mortality not explained by case mix is largely attributable to suboptimal care. We have developed a model to estimate the proportion of the variation in standardised mortality ratios (SMRs) that can be accounted for by variation in preventable mortality. The model was populated with values from the literature to estimate a predictive value of the SMR in this context-specifically the proportion of those hospitals with SMRs among the highest 2.5% that fall among the worst 2.5% for preventable mortality. The extent to which SMRs reflect preventable mortality rates is highly sensitive to the proportion of deaths that are preventable. If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable. The model offers a 'reality check' for case mix adjustment schemes designed to isolate the preventable component of any outcome rate.
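    The paper's central argument, that SMR rankings are dominated by non-preventable variation when preventable deaths are a small share, can be reproduced in miniature. All rates below are illustrative and the standardization is deliberately crude:

```python
import numpy as np

rng = np.random.default_rng(11)
n_hosp, n_pat = 1000, 2000

# Death risk per hospital = small preventable part + large non-preventable
# part (roughly 6% of deaths preventable, as in the paper's base case).
prev_rate = rng.gamma(4.0, 0.006 / 4.0, n_hosp)
nonprev_rate = rng.gamma(4.0, 0.094 / 4.0, n_hosp)
deaths = rng.binomial(n_pat, prev_rate + nonprev_rate)

# Crudely standardised mortality ratio: observed / average expected.
smr = deaths / (n_pat * (prev_rate + nonprev_rate).mean())

cut = int(0.025 * n_hosp)  # worst 2.5% of hospitals
worst_smr = set(np.argsort(smr)[-cut:])
worst_prev = set(np.argsort(prev_rate)[-cut:])
ppv = len(worst_smr & worst_prev) / cut
print(f"share of worst-SMR hospitals also worst on preventable deaths: {ppv:.2f}")
```

    Because the preventable component contributes little of the between-hospital variance, the overlap (the predictive value the paper bounds at about 9%) stays far below what users of report cards typically assume.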

  19. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12-150 mW/cm and total fluences from 24-135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  1. Generating Effective Models and Parameters for RNA Genetic Circuits.

    PubMed

    Hu, Chelsea Y; Varner, Jeffrey D; Lucks, Julius B

    2015-08-21

    RNA genetic circuitry is emerging as a powerful tool to control gene expression. However, little work has been done to create a theoretical foundation for RNA circuit design. A prerequisite to this is a quantitative modeling framework that accurately describes the dynamics of RNA circuits. In this work, we develop an ordinary differential equation model of transcriptional RNA genetic circuitry, using an RNA cascade as a test case. We show that parameter sensitivity analysis can be used to design a set of four simple experiments that can be performed in parallel using rapid cell-free transcription-translation (TX-TL) reactions to determine the 13 parameters of the model. The resulting model accurately recapitulates the dynamic behavior of the cascade, and can be easily extended to predict the function of new cascade variants that utilize new elements with limited additional characterization experiments. Interestingly, we show that inconsistencies between model predictions and experiments led to the model-guided discovery of a previously unknown maturation step required for RNA regulator function. We also determine circuit parameters in two different batches of TX-TL, and show that batch-to-batch variation can be attributed to differences in parameters that are directly related to the concentrations of core gene expression machinery. We anticipate the RNA circuit models developed here will inform the creation of computer aided genetic circuit design tools that can incorporate the growing number of RNA regulators, and that the parametrization method will find use in determining functional parameters of a broad array of natural and synthetic regulatory systems.
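    An ODE cascade of the general kind described can be sketched as follows. The two-species model and all rate constants are hypothetical stand-ins, not the paper's 13-parameter TX-TL model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical sketch: attenuator RNA A is transcribed at a constant rate
# and represses transcription of reporter RNA R; both decay first-order.
k_txA, k_txR, k_deg, K = 5.0, 10.0, 0.2, 2.0

def rhs(t, y):
    A, R = y
    dA = k_txA - k_deg * A
    dR = k_txR * K / (K + A) - k_deg * R   # repression of R by A
    return [dA, dR]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0])
A_end, R_end = sol.y[:, -1]
print(f"A(60)={A_end:.1f}  R(60)={R_end:.2f}")
# analytic steady states: A* = k_txA/k_deg, R* = k_txR*K/((K + A*)*k_deg)
```

    Parameter sensitivity analysis of the kind the paper uses amounts to perturbing each rate constant in turn, re-integrating, and ranking the resulting output changes, which identifies the few experiments that pin down each parameter.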

  2. A model for adjustment of differential gravity measurements with simultaneous gravimeter calibration

    NASA Astrophysics Data System (ADS)

    Dias, F. J. S. S.; Escobar, Í. P.

    2001-05-01

    A mathematical model is proposed for adjustment of differential or relative gravity measurements, involving simultaneously instrumental readings, coefficients of the calibration function, and gravity values of selected base stations. Tests were performed with LaCoste and Romberg model G gravimeter measurements for a set of base stations located along a north-south line with 1750 mGal gravity range. This line was linked to nine control stations, where absolute gravity values had been determined by the free-fall method, with an accuracy better than 10 μGal. The model shows good consistency and stability. Results show the possibility of improving the calibration functions of gravimeters, as well as a better estimation of the gravity values, due to the flexibility admitted to the values of the calibration coefficients.
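    The joint adjustment idea, solving simultaneously for unknown station gravity values and a calibration coefficient, can be sketched as a small linear least-squares problem. Here a single scale factor stands in for the full calibration function, and all numbers are illustrative:

```python
import numpy as np

# Four stations along a line; stations 0 and 3 have absolute control
# values, stations 1 and 2 are unknown. A single scale factor s converts
# gravimeter counter units to mGal.
g_true = np.array([978000.0, 978500.0, 979100.0, 979750.0])  # mGal
true_s = 1.0002
dr = np.diff(g_true) / true_s        # simulated reading differences

# Observation equations g_{i+1} - g_i = s * dr_i, rearranged for the
# unknown vector x = [g1, g2, s]:
A = np.array([
    [ 1.0,  0.0, -dr[0]],   # g1 - s*dr0 = g0
    [-1.0,  1.0, -dr[1]],   # g2 - g1 - s*dr1 = 0
    [ 0.0, -1.0, -dr[2]],   # -g2 - s*dr2 = -g3
])
b = np.array([g_true[0], 0.0, -g_true[3]])
(g1, g2, s), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"g1={g1:.2f}  g2={g2:.2f}  scale={s:.6f}")
```

    With redundant loops and several control stations, the same normal-equation machinery estimates polynomial calibration coefficients alongside the station values, which is the flexibility the abstract credits for the improved results.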

  3. Assimilation of surface data in a one-dimensional physical-biogeochemical model of the surface ocean: 2. Adjusting a simple trophic model to chlorophyll, temperature, nitrate, and pCO2 data

    SciTech Connect

    Prunet, P.; Minster, J.F.; Echevin, V.

    1996-03-01

    This paper builds on a previous work which produced a constrained physical-biogeochemical model of the carbon cycle in the surface ocean. Three issues are addressed: (1) the results of chlorophyll assimilation using a simpler trophic model, (2) adjustment of parameters using the simpler model and data other than surface chlorophyll concentrations, and (3) consistency of the main carbon fluxes derived by the simplified model with values from the more complex model. A one-dimensional vertical model coupling the physics of the ocean mixed layer and a description of biogeochemical processes with a simple trophic model was used to address these issues. Chlorophyll concentration, nitrate concentration, and temperature were used to constrain the model. The surface chlorophyll information was shown to be sufficient to constrain primary production within the photic layer. The simultaneous assimilation of chlorophyll, nitrate, and temperature resulted in a significant improvement of model simulation for the data used. Of the nine biological and physical parameters which resulted in significant variations of the simulated chlorophyll concentration, seven linear combinations of the model parameters were constrained. The model fit was an improvement on independent surface chlorophyll and nitrate data. This work indicates that a relatively simple biological model is sufficient to describe carbon fluxes. Assimilation of satellite or climatological data could be used to adjust the parameters of the model for three-dimensional models. It also suggests that the main carbon fluxes driving the carbon cycle within surface waters could be derived regionally from surface information. 38 refs., 16 figs., 7 tabs.

  4. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally less sensitive to sediment output and insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were slightly sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive to nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were only slightly sensitive to the corresponding outputs. Simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration, demonstrate that the sensitivity analysis is practicable for parameter adjustment, show the model's adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
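
The perturbation method described above can be sketched as a dimensionless sensitivity index comparing relative output change to relative parameter change. The toy model and parameter names below are illustrative, not AnnAGNPS code:

```python
# Illustrative sketch of perturbation-based sensitivity analysis
# (hypothetical model and parameter values; not the actual AnnAGNPS code).

def relative_sensitivity(model, params, name, delta=0.1):
    """Perturb one parameter by +/- delta (fractional) and return the
    dimensionless index S = (relative output change) / (relative input change)."""
    base = model(params)
    up = dict(params); up[name] = params[name] * (1 + delta)
    down = dict(params); down[name] = params[name] * (1 - delta)
    dy = (model(up) - model(down)) / base   # relative output change
    dx = 2 * delta                          # relative parameter change
    return dy / dx

# Toy runoff model: strongly sensitive to CN, weakly sensitive to K.
def toy_runoff(p):
    return p["CN"] ** 2 + 0.01 * p["K"]

params = {"CN": 70.0, "K": 0.3}
s_cn = relative_sensitivity(toy_runoff, params, "CN")  # near 2 (quadratic term)
s_k = relative_sensitivity(toy_runoff, params, "K")    # near 0
```

A parameter with |S| well above the others (here CN) would be flagged as sensitive and prioritized in calibration.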

  5. External validation of established risk adjustment models for procedural complications after percutaneous coronary intervention.

    PubMed

    Kunadian, B; Dunning, J; Das, R; Roberts, A P; Morley, R; Turley, A J; Twomey, D; Hall, J A; Wright, R A; Sutton, A G C; Muir, D F; de Belder, M A

    2008-08-01

    Workable risk models for patients undergoing percutaneous coronary intervention (PCI) are urgently needed. To validate two proposed risk adjustment models (the Mayo Clinic Risk Score (MC), USA, and the North West Quality Improvement Programme (NWQIP), UK) for in-hospital PCI complications on an independent dataset of relatively high risk patients undergoing PCI. Tertiary centre in northern England. Between September 2002 and August 2006, 5034 consecutive PCI procedures (validation set) were performed on a patient group characterised by a high incidence of acute myocardial infarction (MI; 16.1%) and cardiogenic shock (1.7%). Two external models, the NWQIP model and the MC model, were externally validated. Major adverse cardiovascular and cerebrovascular events: in-hospital mortality, Q-wave MI, emergency coronary artery bypass grafting and cerebrovascular accidents. An overall in-hospital complication rate of 2% was observed. Multivariate regression analysis identified risk factors for in-hospital complications that were similar to the risk factors identified by the two external models. When fitted to the dataset, both external models had an area under the receiver operating characteristic curve ≥0.85 (c index (95% CI): NWQIP 0.86 (0.82 to 0.90); MC 0.87 (0.84 to 0.90)), indicating excellent overall model discrimination and calibration (Hosmer-Lemeshow test, p>0.05). The NWQIP model was accurate in predicting in-hospital complications in different patient subgroups. Both models were externally validated and yielded comparable results, providing excellent discrimination and calibration when applied to a patient group in a geographic population different from that in which the original models were developed.
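
The c-index used above to quantify discrimination is the probability that a randomly chosen patient who had a complication received a higher predicted risk than one who did not. A minimal sketch with made-up risk scores (not real PCI data):

```python
# c-index (area under the ROC curve) computed as a rank-sum probability:
# P(risk of a random event case > risk of a random non-event case),
# counting ties as half. Toy predicted risks, not real patient data.

def c_index(risk_events, risk_nonevents):
    wins = ties = 0
    for e in risk_events:
        for ne in risk_nonevents:
            if e > ne:
                wins += 1
            elif e == ne:
                ties += 1
    return (wins + 0.5 * ties) / (len(risk_events) * len(risk_nonevents))

events = [0.9, 0.7, 0.6]          # predicted risks, patients with complications
nonevents = [0.2, 0.4, 0.6, 0.1]  # predicted risks, patients without
auc = c_index(events, nonevents)
```

A value near 0.5 means no discrimination; values ≥0.85, as reported for both models above, indicate excellent discrimination.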

  6. Iterative integral parameter identification of a respiratory mechanics model

    PubMed Central

    2012-01-01

    Background Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual’s model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. Results The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. Conclusion These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application. PMID:22809585
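
The core idea of the integral method, integrating the model equation so that parameters can be found by linear least squares instead of noise-amplifying differentiation, can be sketched on a first-order system. This is a deliberate simplification; the paper applies an iterative version of the idea to a second-order respiratory mechanics model:

```python
# Sketch of integral-based parameter identification (illustrative, not the
# paper's exact model): identify a, b in dy/dt = a*y + b*u by integrating,
#   y(t) - y(0) = a * int_0^t y dt + b * int_0^t u dt,
# and solving the resulting 2-unknown linear least-squares problem.

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral of a sampled signal."""
    out, acc = [0.0], 0.0
    for i in range(1, len(y)):
        acc += 0.5 * (y[i] + y[i - 1]) * dt
        out.append(acc)
    return out

# Simulate dy/dt = a*y + b*u with a=-2, b=1 and a unit step input.
a_true, b_true, dt = -2.0, 1.0, 0.001
y, u = [0.0], [1.0] * 5001
for k in range(5000):
    y.append(y[-1] + dt * (a_true * y[-1] + b_true * u[k]))

Y = cumtrapz(y, dt)   # running integral of y
U = cumtrapz(u, dt)   # running integral of u
r = [yi - y[0] for yi in y]
# Normal equations for [a, b] minimizing sum (r_i - a*Y_i - b*U_i)^2.
syy = sum(v * v for v in Y); suu = sum(v * v for v in U)
syu = sum(p * q for p, q in zip(Y, U))
sry = sum(p * q for p, q in zip(r, Y)); sru = sum(p * q for p, q in zip(r, U))
det = syy * suu - syu * syu
a_est = (sry * suu - sru * syu) / det
b_est = (sru * syy - sry * syu) / det
```

Because only integrals of the measured signals appear, measurement noise is smoothed rather than amplified, which is the robustness property the abstract highlights.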

  7. Important observations and parameters for a salt water intrusion model

    USGS Publications Warehouse

    Shoemaker, W.B.

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  8. Separation-Individuation, Family Cohesion, and Adjustment to College: Measurement Validation and Test of a Theoretical Model.

    ERIC Educational Resources Information Center

    Rice, Kenneth G.; And Others

    1990-01-01

    Examined relation between adolescent separation-individuation, family cohesion, and college adjustment in college students (N=240). First group was used to explore individuation measures. Theoretical model specifying that college adjustment would be predicted by family cohesion, positive separation feelings, and independence from parents, was…

  9. A pressure consistent bridge correction of Kovalenko-Hirata closure in Ornstein-Zernike theory for Lennard-Jones fluids by apparently adjusting sigma parameter

    SciTech Connect

    Ebato, Yuki; Miyata, Tatsuhiko

    2016-05-15

    Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, U{sup ex}, pressure through the virial route, P{sub v}, and excess chemical potential, μ{sup ex}, for one-component Lennard-Jones (LJ) fluids under hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, it was shown in our previous paper that the method of apparently adjusting the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Molec. Liquids. 217, 75 (2016)]. In our previous paper, we evaluated the actual variation in the σ parameter by using a fitting procedure to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes the condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without a fitting procedure to MD results, and retains the form of the HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function. We discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.

  10. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model also reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
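
A common Gompertz parameterization of body weight can be sketched as follows. The parameter values are illustrative, not the paper's estimates; in the mixed model each bird would additionally receive its own random deviation on, e.g., mature weight:

```python
import math

# One common Gompertz growth parameterization (hypothetical broiler-like
# values, not the paper's estimates):
#   W(t) = Wm * exp(-exp(-b * (t - ti)))
# Wm = mature weight, b = rate constant, ti = age at the inflection point.

def gompertz(t, wm, b, ti):
    return wm * math.exp(-math.exp(-b * (t - ti)))

wm, b, ti = 5000.0, 0.05, 40.0   # grams, 1/day, days (illustrative)

w_infl = gompertz(ti, wm, b, ti)   # weight at inflection equals Wm/e
rate_infl = b * wm / math.e        # maximum daily gain, attained at t = ti
```

The fixed-effects fit estimates one (Wm, b, ti) for all birds; the mixed model's random effect lets Wm vary by bird, which is what partitions between- from within-bird variance.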

  11. Microscopic calculation of interacting boson model parameters by potential-energy surface mapping

    SciTech Connect

    Bentley, I.; Frauendorf, S.

    2011-06-15

    A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.

  12. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model

  13. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
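
A scalar Kalman filter illustrates the estimation principle at its simplest; the paper's extended Kalman filter on retarded differential-difference equations is considerably more involved. All values below are illustrative:

```python
import random

# Minimal scalar Kalman filter estimating a constant gain from noisy
# observations (illustrative only; the paper applies an extended Kalman
# filter to a human pilot-model transfer function with a time delay).

random.seed(0)
k_true = 2.5          # hypothetical "pilot gain" to be estimated
x, p = 0.0, 10.0      # state estimate and its variance (vague prior)
q, r = 1e-6, 0.25     # process and measurement noise variances

for _ in range(500):
    z = k_true + random.gauss(0.0, r ** 0.5)   # noisy measurement
    p += q                                     # predict: variance grows
    kgain = p / (p + r)                        # Kalman gain
    x += kgain * (z - x)                       # update estimate
    p *= (1.0 - kgain)                         # update variance
```

After enough measurements the estimate converges to the true gain and the variance `p` collapses, which is the behavior that fails (diverging residuals) when the model assumptions are wrong, as the abstract notes.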

  14. Is it getting hot in here? Adjustment of hydraulic parameters in six boreal and temperate tree species after 5 years of warming.

    PubMed

    McCulloh, Katherine A; Petitmermet, Joshua; Stefanski, Artur; Rice, Karen E; Rich, Roy L; Montgomery, Rebecca A; Reich, Peter B

    2016-12-01

    Global temperatures (T) are rising, and for many plant species, their physiological response to this change has not been well characterized. In particular, how hydraulic parameters may change has only been examined experimentally for a few species. To address this, we measured characteristics of the hydraulic architecture of six species growing in ambient T and ambient +3.4 °C T plots in two experimentally warmed forest sites in Minnesota. These sites are at the temperate-boreal ecotone, and we measured three species from each forest type. We hypothesized that relative to boreal species, temperate species near their northern range border would increase xylem conduit diameters when grown under elevated T. We also predicted a continuum of responses among wood types, with conduit diameter increases correlating with increases in the complexity of wood structure. Finally, we predicted that increases in conduit diameter and specific hydraulic conductivity would positively affect photosynthetic rates and growth. Our results generally supported our hypotheses, and conduit diameter increased under elevated T across all species, although this pattern was driven predominantly by three species. Two of these species were temperate angiosperms, but one was a boreal conifer, contrary to predictions. We observed positive relationships between the change in specific hydraulic conductivity and both photosynthetic rate (P = 0.080) and growth (P = 0.012). Our results indicate that species differ in their ability to adjust hydraulically to increases in T. Specifically, species with more complex xylem anatomy, particularly those individuals growing near the cooler edge of their range, appeared to be better able to increase conduit diameters and specific hydraulic conductivity, which permitted increases in photosynthesis and growth. Our data support results that indicate individual's ability to physiologically adjust is related to their location within their species range, and

  15. Prediction of interest rate using CKLS model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (j+n)-th time point is estimated by the j-th window which is defined as the set consisting of the observed interest rates at the j'-th time point where j≤j'≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j-m), φ(j-m+1),…, φ(j-1) and the interest rate rj+n at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value rj+n+1 of the interest rate at the next time point when the value rj+n of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate rj+n+d at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.

  16. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (j+n)-th time point is estimated by the j-th window which is defined as the set consisting of the observed interest rates at the j′-th time point where j≤j′≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j−m), φ(j−m+1),…, φ(j−1) and the interest rate rj+n at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value rj+n+1 of the interest rate at the next time point when the value rj+n of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate rj+n+d at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
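
The CKLS short-rate dynamics being predicted above, dr = (α + βr)dt + σ r^γ dW, can be simulated with a simple Euler-Maruyama discretization. The parameter values below are illustrative, not estimates from the paper:

```python
import random

# Euler-Maruyama simulation of the CKLS short-rate dynamics
#   dr = (alpha + beta * r) dt + sigma * r**gamma dW
# with illustrative (not estimated) parameter values.

random.seed(1)
alpha, beta, sigma, gamma = 0.02, -0.2, 0.1, 0.5   # hypothetical values
r, dt = 0.05, 1.0 / 252.0                           # start at 5%, daily steps
path = [r]
for _ in range(252):                                # one year of daily steps
    dw = random.gauss(0.0, dt ** 0.5)               # Brownian increment
    r = r + (alpha + beta * r) * dt + sigma * (max(r, 0.0) ** gamma) * dw
    path.append(r)
```

With beta < 0 the rate mean-reverts toward -alpha/beta (here 0.10); the stochastic-parameter version of the paper would re-estimate (alpha, beta, sigma, gamma) in each rolling window rather than fixing them.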

  17. A Hamiltonian Model of Generator With AVR and PSS Parameters*

    NASA Astrophysics Data System (ADS)

    Qian, Jing; Zeng, Yun; Zhang, Lixiang; Xu, Tianmao

    Taking a typical thyristor excitation system including the automatic voltage regulator (AVR) and the power system stabilizer (PSS) as an example, the supply rates of the AVR and PSS branches are selected as the energy function of the controller, and this is added to the Hamiltonian function of the generator to compose the total energy function. By a proper transformation, the standard form of the Hamiltonian model of the generator including AVR and PSS is derived. The structure matrix and damping matrix of the model include the characteristic parameters of the AVR and PSS, which provides a foundation for studying the mechanism of parameter interaction between the AVR, the PSS and the generator. Finally, the structural relationships and interactions of the system model are studied; the results show that the structural and damping characteristics reflected by the model are consistent with the practical system.

  18. Assessing parameter identifiability in phylogenetic models using data cloning.

    PubMed

    Ponciano, José Miguel; Burleigh, J Gordon; Braun, Edward L; Taper, Mark L

    2012-12-01

    The success of model-based methods in phylogenetics has motivated much research aimed at generating new, biologically informative models. This new computer-intensive approach to phylogenetics demands validation studies and sound measures of performance. To date there has been little practical guidance available as to when and why the parameters in a particular model can be identified reliably. Here, we illustrate how Data Cloning (DC), a recently developed methodology to compute the maximum likelihood estimates along with their asymptotic variance, can be used to diagnose structural parameter nonidentifiability (NI) and distinguish it from other parameter estimability problems, including when parameters are structurally identifiable, but are not estimable in a given data set (INE), and when parameters are identifiable, and estimable, but only weakly so (WE). The application of the DC theorem uses well-known and widely used Bayesian computational techniques. With the DC approach, practitioners can use Bayesian phylogenetics software to diagnose nonidentifiability. Theoreticians and practitioners alike now have a powerful, yet simple tool to detect nonidentifiability while investigating complex modeling scenarios, where getting closed-form expressions in a probabilistic study is complicated. Furthermore, here we also show how DC can be used as a tool to examine and eliminate the influence of the priors, in particular if the process of prior elicitation is not straightforward. Finally, when applied to phylogenetic inference, DC can be used to study at least two important statistical questions: assessing identifiability of discrete parameters, like the tree topology, and developing efficient sampling methods for computationally expensive posterior densities.

  19. Assessing Parameter Identifiability in Phylogenetic Models Using Data Cloning

    PubMed Central

    Ponciano, José Miguel; Burleigh, J. Gordon; Braun, Edward L.; Taper, Mark L.

    2012-01-01

    The success of model-based methods in phylogenetics has motivated much research aimed at generating new, biologically informative models. This new computer-intensive approach to phylogenetics demands validation studies and sound measures of performance. To date there has been little practical guidance available as to when and why the parameters in a particular model can be identified reliably. Here, we illustrate how Data Cloning (DC), a recently developed methodology to compute the maximum likelihood estimates along with their asymptotic variance, can be used to diagnose structural parameter nonidentifiability (NI) and distinguish it from other parameter estimability problems, including when parameters are structurally identifiable, but are not estimable in a given data set (INE), and when parameters are identifiable, and estimable, but only weakly so (WE). The application of the DC theorem uses well-known and widely used Bayesian computational techniques. With the DC approach, practitioners can use Bayesian phylogenetics software to diagnose nonidentifiability. Theoreticians and practitioners alike now have a powerful, yet simple tool to detect nonidentifiability while investigating complex modeling scenarios, where getting closed-form expressions in a probabilistic study is complicated. Furthermore, here we also show how DC can be used as a tool to examine and eliminate the influence of the priors, in particular if the process of prior elicitation is not straightforward. Finally, when applied to phylogenetic inference, DC can be used to study at least two important statistical questions: assessing identifiability of discrete parameters, like the tree topology, and developing efficient sampling methods for computationally expensive posterior densities. PMID:22649181
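
The mechanics of data cloning can be illustrated on a conjugate normal model rather than the phylogenetic setting: raising the likelihood to the K-th power drives the posterior mean toward the maximum likelihood estimate, and for an identifiable parameter the posterior variance shrinks roughly as 1/K (for a nonidentifiable one it stays bounded away from zero). A minimal sketch:

```python
# Conceptual sketch of data cloning on a conjugate normal model (not the
# paper's phylogenetic setting). Cloning n observations K times multiplies
# the likelihood precision by K, so the posterior variance of an
# identifiable parameter scales like 1/K and the mean approaches the MLE.

def cloned_posterior(ybar, n, sigma2, prior_var, k):
    """Posterior mean and variance of a normal mean, data cloned k times."""
    precision = 1.0 / prior_var + k * n / sigma2
    mean = (k * n * ybar / sigma2) / precision
    return mean, 1.0 / precision

ybar, n, sigma2, prior_var = 3.0, 20, 4.0, 100.0   # toy data summary
m1, v1 = cloned_posterior(ybar, n, sigma2, prior_var, k=1)
m100, v100 = cloned_posterior(ybar, n, sigma2, prior_var, k=100)
```

Here v100 is about 100 times smaller than v1 and m100 is essentially the MLE (the sample mean); the diagnostic in the paper is that a parameter whose cloned posterior variance does *not* shrink this way is structurally nonidentifiable.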

  20. Investigation of land use effects on Nash model parameters

    NASA Astrophysics Data System (ADS)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    Flood forecasting is of great importance in hydrologic planning, hydraulic structure design, water resources management, and sustainable designs like flood control and management. Nash's instantaneous unit hydrograph is frequently used for simulating hydrological response in natural watersheds. Urban hydrology is gaining more attention due to population increases and associated construction escalation. Rapid development of urban areas affects the hydrologic processes of watersheds by decreasing soil permeability, flood base flow and lag time, and increasing flood volume, peak runoff rates and flood frequency. In this study the influence of urbanization on the significant parameters of the Nash model has been investigated. These parameters were calculated using three popular methods (i.e. moment, root mean square error and random sampling data generation) in a small watershed consisting of one natural sub-watershed which drains into a residentially developed sub-watershed in the city of Sierra Vista, Arizona. The results indicated that for all three methods, the lag time, which is the product of the Nash parameters "K" and "n", in the natural sub-watershed is greater than in the developed one. This logically implies more storage and/or attenuation in the natural sub-watershed. The median K and n parameters derived from the three methods using calibration events were tested via a set of verification events. The results indicated that all three methods have acceptable accuracy in hydrograph simulation. The CDF curves and histograms of the parameters clearly show the differences in Nash parameter values between the natural and developed sub-watersheds. Some specific upper and lower percentile values of the median of the generated parameters (i.e. 10, 20 and 30 %) were analyzed to further investigate the derived parameters. The model was sensitive to variations in the values of the uncertain K and n parameters. Changes in n are smaller than in K in both sub-watersheds indicating
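
The Nash instantaneous unit hydrograph underlying the parameters K and n discussed above (a cascade of n linear reservoirs with storage coefficient K) can be sketched directly; the parameter values below are illustrative:

```python
import math

# The Nash instantaneous unit hydrograph (cascade of n linear reservoirs,
# storage coefficient K), with illustrative parameter values:
#   u(t) = 1 / (K * Gamma(n)) * (t/K)**(n-1) * exp(-t/K)
# Its first moment, the watershed lag time, is n*K.

def nash_iuh(t, n, k):
    return (t / k) ** (n - 1) * math.exp(-t / k) / (k * math.gamma(n))

n, k, dt = 3.0, 1.5, 0.01                 # hypothetical shape/storage (hours)
ts = [i * dt for i in range(1, 6000)]
area = sum(nash_iuh(t, n, k) * dt for t in ts)     # unit area, ~1
lag = sum(t * nash_iuh(t, n, k) * dt for t in ts)  # first moment, ~n*k = 4.5
```

The study's finding that lag (= n·K) is larger in the natural sub-watershed corresponds to a flatter, later-peaking u(t), i.e. more storage and attenuation.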

  1. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.

  2. Constraint on Seesaw Model Parameters with Electroweak Vacuum Stability

    NASA Astrophysics Data System (ADS)

    Okane, H.; Morozumi, T.

    2017-03-01

    Within the standard model, the electroweak vacuum is metastable. We study how heavy right-handed neutrinos in the seesaw model affect the stability through their loop contributions to the Higgs potential. Requiring that the lifetime of the electroweak vacuum be longer than the age of the Universe, we obtain constraints on parameters such as their masses and the strength of their Yukawa couplings.

  3. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  4. Left-right-symmetric model parameters: Updated bounds

    SciTech Connect

    Polak, J.; Zralek, M. )

    1992-11-01

    Using the available updated experimental data, including the latest results from the CERN e+e- collider LEP and improved parity-violation results, we find new constraints on the parameters of the left-right-symmetric model in the case of light right-handed neutrinos.

  5. Parabolic problems with parameters arising in evolution model for phytoremediation

    NASA Astrophysics Data System (ADS)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One of the new innovative methods of removing metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).
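
A minimal sketch of the kind of parameterized soil-plant ODE system described above. The equations and rate constants below are hypothetical illustrations, not the authors' model:

```python
# Hypothetical two-compartment soil-plant uptake sketch (not the authors'
# equations): metal leaves the soil by root uptake at rate u and leaves the
# plant (harvest/degradation) at rate d:
#   ds/dt = -u * s
#   dp/dt =  u * s - d * p

def simulate(s0, p0, u, d, dt, steps):
    """Forward-Euler integration of the soil (s) and plant (p) metal pools."""
    s, p = s0, p0
    for _ in range(steps):
        s, p = s + dt * (-u * s), p + dt * (u * s - d * p)
    return s, p

# 1000 time units at dt = 0.01: soil pool is almost fully remediated.
s_end, p_end = simulate(100.0, 0.0, 0.05, 0.01, 0.01, 100000)
```

The "parameters" of the title play the role of u and d here: calibrating them against field data is what makes such a model predictive for remediation times.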

  7. Integrating microbial diversity in soil carbon dynamic models parameters

    NASA Astrophysics Data System (ADS)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed during the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them treat the microbial biomass as a compartment of the soil organic matter (a carbon pool). But the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods over the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon models. Some interesting attempts can be found; they are dominated by the incorporation of several compartments for different groups of microbial biomass, defined in terms of functional traits and/or biogeochemical composition, to integrate microbial diversity. However, these models are basically heuristic in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity into a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land-use history. The soils were then incubated for 104 days, and labelled and non-labelled CO2 fluxes were measured at ten
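    The abstract above describes compartment models whose fluxes are ordinary differential equations, with microbial biomass as a potentially limiting pool. A minimal two-pool sketch of that idea (not the DIMIMOS model; the rate constants, assimilation efficiency and pool sizes are made-up values):

    ```python
    # Residue carbon R decomposes at a rate limited by microbial biomass B
    # (Michaelis-Menten form); a fraction 'eff' is assimilated into B and the
    # rest is respired as CO2. All constants are hypothetical.
    def step(R, B, dt, k=0.05, eff=0.4, death=0.01, Km=1.0):
        decomp = k * R * B / (Km + B)     # microbial biomass as a limiting factor
        dR = -decomp
        dB = eff * decomp - death * B
        co2 = (1.0 - eff) * decomp        # respired carbon
        return R + dR * dt, B + dB * dt, co2 * dt

    R, B, total_co2 = 10.0, 0.5, 0.0      # g C: residue pool, microbial pool
    for _ in range(104 * 24):             # 104-day incubation, hourly Euler steps
        R, B, dco2 = step(R, B, dt=1.0 / 24.0)
        total_co2 += dco2

    print(round(R, 2), round(B, 2), round(total_co2, 2))
    ```

    Statistical modelling of parameters such as `k` against measured microbial diversity indicators would then replace the fixed constants used here.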

  8. Parameter Perturbations with the GFDL Model: Smoothness and Uncertainty

    NASA Astrophysics Data System (ADS)

    Zamboni, L.; Jacob, R. L.; Neelin, J.; Kotamarthi, V. R.; Held, I.; Zhao, M.; Williams, T. J.; McWilliams, J. C.; Moore, T. L.; Wilde, M.; Nangia, N.

    2013-12-01

    We found that smoothness characterizes the response of global precipitation to perturbations of 6 parameters related to cloud physics and circulation in 50-year AMIP simulations performed with the GFDL model at 1x1 degree resolution. Specifically, the AGCM response depends quadratically on the parameters (Fig.1a). Linearization of the derivative of a cost function (the globally averaged squared difference between model and observations; here illustrated for the entrainment rate) up to at least the 2nd order around the standard case (eo=10) proves necessary for optimization purposes to correctly predict where the optimum value lies (Fig.1b), and reflects the relevance of the non-linearity of the response. The linearization also provides indications about desirable changes in the parameters' values for regional optimization, which may be locally different from those for the global average. Uncertainty of precipitation varies from -9 to 6% of the model's standard version and is highest for the ice-fall speed in stratiform clouds and the entrainment in convective clouds, which are the parameters with the widest range of possible values (Fig.2). The smooth behavior and the quantified measure of sensitivity we report here are the backbone for the design of computationally effective multi-parameter perturbations and model optimization, which ultimately improve the reliability of AGCM simulations. Figure caption: Smoothness and optimum parameter value for the entrainment rate. a) Root mean squared error and fits based on values eo=[8,16] and extrapolated over eo=[4,6]; b) derivative of the cost function computed at different levels of precision in the linearization (blue, green and black lines) and numerically using 1) the quadratic fit in the expression of the cost function (red line) and 2) only AGCM output (pink line). Note that the linearization determines the correct value of the minimum without using any information about the model's output at that point: the quadratic fit is based on data
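    The quadratic dependence of the cost function on a parameter means a handful of perturbed runs suffices to locate the optimum analytically. A sketch with hypothetical RMSE values for the entrainment rate (the numbers are illustrative, not the paper's):

    ```python
    import numpy as np

    # Hypothetical cost (RMSE of global precipitation) at a few perturbed
    # settings of the entrainment rate eo.
    e = np.array([8.0, 10.0, 12.0, 14.0, 16.0])    # perturbed parameter values
    rmse = np.array([2.9, 2.5, 2.3, 2.4, 2.8])     # model-minus-obs cost

    a, b, c = np.polyfit(e, rmse, 2)               # quadratic fit J(e) = a e^2 + b e + c
    e_opt = -b / (2.0 * a)                         # dJ/de = 0 gives the optimum
    print(round(e_opt, 2))
    ```

    With a smooth quadratic response, five AGCM runs per parameter are enough to predict where the minimum lies without running the model at the optimum itself.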

  9. Sensitivity of ENSO variability to Pacific freshwater flux adjustment in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Kang, Xianbiao; Huang, Ronghui; Wang, Zhanggui; Zhang, Rong-Hua

    2014-09-01

    The effects of freshwater flux (FWF) on modulating ENSO have been of great interest in recent years. Large FWF bias is evident in Coupled General Circulation Models (CGCMs), especially over the tropical Pacific, where large precipitation bias exists due to the so-called "double ITCZ" problem. By applying an empirical correction to FWF over the tropical Pacific, the sensitivity of ENSO variability is investigated using the new version (version 1.0) of the NCAR Community Earth System Model (CESM1.0), which tends to overestimate the interannual variability of ENSO accompanied by large FWF into the ocean. In response to a small adjustment of FWF, interannual variability in CESM1.0 is reduced significantly, with the amplitude of FWF reduced because the applied adjustment is always opposite in sign to the original FWF field. Furthermore, it is illustrated that the interannual variability of precipitation weakens in response to the reduced interannual variability of SST. Process analysis indicates that the interannual variability of SST is damped through a reduced FWF-salt-density-mixing-SST feedback, and also through a reduced SST-wind-thermocline feedback. These results highlight the importance of FWF in modulating ENSO; FWF should be adequately taken into account, and its simulation improved, in order to reduce the bias of ENSO simulations by CESM.
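    The sign-opposite adjustment described above can be illustrated in a few lines; the damping coefficient alpha and the synthetic anomalies are assumptions, not values from the study:

    ```python
    import numpy as np

    # Synthetic interannual FWF anomalies (arbitrary units); the adjustment
    # term is always opposite in sign to the original field, so it damps
    # the amplitude without changing the sign structure.
    rng = np.random.default_rng(0)
    fwf = rng.normal(0.0, 1.0, 1000)
    alpha = 0.5                        # hypothetical damping coefficient
    fwf_adj = fwf - alpha * fwf        # adjustment = -alpha * fwf

    print(round(fwf.std(), 3), round(fwf_adj.std(), 3))
    ```

    The adjusted field has its standard deviation reduced by the factor (1 - alpha), mirroring the reduced FWF amplitude reported for CESM1.0.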

  10. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    PubMed

    Clark, Samuel J; Houle, Brian

    2014-01-01

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs, and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
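    A Heckman-type correction of the kind described can be sketched as a two-step procedure on synthetic data: a probit model of survey response, then an outcome regression augmented with the inverse Mills ratio. This is a generic textbook sketch, not the paper's exact specification:

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    # Synthetic data: response depends on z and an error u; the outcome error v
    # is correlated with u, so the responders' mean is selection-biased.
    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)                              # covariate driving response
    u, v = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
    respond = (0.5 + z + u) > 0                         # selection equation
    y = 2.0 + v                                         # outcome; true mean is 2.0

    X = np.column_stack([np.ones(n), z])

    def nll(b):                                         # probit negative log-likelihood
        p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
        return -(respond * np.log(p) + ~respond * np.log(1 - p)).sum()

    b_hat = minimize(nll, np.zeros(2)).x                # step 1: fit probit
    mills = norm.pdf(X @ b_hat) / norm.cdf(X @ b_hat)   # inverse Mills ratio

    # Step 2: outcome regression among responders with the Mills ratio added.
    A = np.column_stack([np.ones(respond.sum()), mills[respond]])
    coef, *_ = np.linalg.lstsq(A, y[respond], rcond=None)

    naive = y[respond].mean()                           # biased by selection
    print(round(naive, 2), round(coef[0], 2))           # corrected mean is closer to 2.0
    ```

    In an HIV-prevalence application, interviewer identity typically serves as the exclusion variable playing the role of `z` here.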

  11. Electro-optical parameters of bond polarizability model for aluminosilicates.

    PubMed

    Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam

    2006-04-06

    Electro-optical parameters (EOPs) of the bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in a molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to the large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared with those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference in the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability-related characteristics of relevant systems in the framework of the BPM.

  12. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model

  13. Estimating model parameters in nonautonomous chaotic systems using synchronization

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-05-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems; some general conditions, derived analytically by means of the periodic version of the LaSalle invariance principle for differential equations, ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system and its corresponding receiver system. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulations.
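    The scheme described (linear feedback coupling plus an adaptive law for the unknown parameter) can be sketched on a standard example; here a Lorenz system stands in for the experimental system, and the gains k and gamma are arbitrary choices, not values from the Letter:

    ```python
    # Adaptive-synchronization estimate of the Lorenz sigma from the x-equation:
    # receiver xh synchronizes to x via linear feedback k*e, while the parameter
    # estimate ph is updated by the adaptive law dph = gamma*(y - x)*e.
    dt, steps = 0.001, 200_000
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0       # "experimental" system
    x, y, z = 1.0, 1.0, 1.0
    xh, ph = 0.0, 5.0                              # receiver state, parameter guess
    k, gamma = 5.0, 2.0                            # feedback and adaptation gains

    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        e = x - xh                                 # synchronization error
        dxh = ph * (y - x) + k * e                 # receiver with linear feedback
        dph = gamma * (y - x) * e                  # adaptive parameter law
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xh, ph = xh + dxh * dt, ph + dph * dt

    print(round(ph, 2))                            # tracks the true sigma = 10
    ```

    A Lyapunov function V = e^2/2 + (sigma - ph)^2/(2*gamma) gives dV/dt = -k*e^2, so the chaotic trajectory's persistent excitation drives ph to the true value.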

  14. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  15. Multiscale Parameter Regionalization for consistent global water resources modelling

    NASA Astrophysics Data System (ADS)

    Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.

    2017-04-01

    Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer function parameters across scales and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allow validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other
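    The core MPR idea, applying a calibrated transfer function at the finest grid and then upscaling the resulting parameter field, can be sketched as follows; the pedotransfer function and its coefficients are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    clay = rng.uniform(0.1, 0.4, (8, 8))             # clay fraction on the fine grid
    a, b = 0.05, 0.5                                 # calibrated transfer-function coefficients
    porosity_fine = a + b * clay                     # regionalized parameter at fine scale

    def block_mean(field, f):
        """Upscale a 2-D field by averaging f x f blocks."""
        n, m = field.shape
        return field.reshape(n // f, f, m // f, f).mean(axis=(1, 3))

    porosity_coarse = block_mean(porosity_fine, 4)   # parameter upscaled 4x

    # With a linear transfer function, regionalize-then-upscale equals
    # upscale-then-regionalize: the parameter fields stay consistent across scales.
    print(np.allclose(porosity_coarse, a + b * block_mean(clay, 4)))   # True
    ```

    Calibration adjusts only the global coefficients, so a set tuned at 50 km can be reapplied directly at 1 km against the fine-scale predictor fields.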

  16. Realistic uncertainties on Hapke model parameters from photometric measurement

    NASA Astrophysics Data System (ADS)

    Schmidt, Frédéric; Fernando, Jennifer

    2015-11-01

    The single-particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop here an innovative Bayesian inversion method in order to study in detail the uncertainties of retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) data that are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test if the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk-resolved image, EPF-like and BRDF-like geometric sampling conditions.
The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
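    A Bayesian retrieval of b and c of the kind described can be sketched with a Metropolis sampler on a double Henyey-Greenstein phase function; the data, noise level, priors and proposal widths below are all assumptions, not the article's setup:

    ```python
    import numpy as np

    def dhg(g, b, c):
        # Double Henyey-Greenstein phase function (one common convention):
        # b sets lobe width, c the balance of back- vs forward-scattering.
        f = 1.0 - b * b
        fwd = f / (1.0 - 2.0 * b * np.cos(g) + b * b) ** 1.5
        bwd = f / (1.0 + 2.0 * b * np.cos(g) + b * b) ** 1.5
        return (1.0 - c) * fwd + c * bwd

    rng = np.random.default_rng(3)
    g = np.linspace(0.1, 2.5, 40)                   # phase angles (radians)
    obs = dhg(g, 0.3, 0.6) + rng.normal(0.0, 0.02, g.size)

    def logpost(b, c):                              # flat priors on (0, 1)
        if not (0.0 < b < 1.0 and 0.0 < c < 1.0):
            return -np.inf
        return -0.5 * np.sum(((obs - dhg(g, b, c)) / 0.02) ** 2)

    b, c = 0.5, 0.5
    lp = logpost(b, c)
    samples = []
    for _ in range(20_000):                         # Metropolis random walk
        bp, cp = b + 0.02 * rng.normal(), c + 0.02 * rng.normal()
        lpp = logpost(bp, cp)
        if np.log(rng.uniform()) < lpp - lp:
            b, c, lp = bp, cp, lpp
        samples.append((b, c))
    post = np.array(samples[5_000:])                # discard burn-in
    print(post.mean(axis=0).round(2))               # near the true (0.3, 0.6)
    ```

    Inspecting the joint (b, c) samples, rather than only their means, is what reveals whether an apparent b-c relation is a retrieval artifact or a property of the data.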

  17. Prediction under change: invariant model parameters in a varying environment

    NASA Astrophysics Data System (ADS)

    Schymanski, S. J.; Or, D.; Roderick, M. L.; Sivapalan, M.

    2012-04-01

    Hydrological understanding is commonly synthesised into complex mechanistic models, some of which have become "as inscrutable as nature itself" (Harte, 2002). Parameters for most models are estimated from past observations. This may result in an ill-posed problem with associated "equifinality" (Beven, 1993), in which the information content in calibration data is insufficient for distinguishing a suitable parameter set among all possible sets. Consequently, we are unable to identify the "correct" parameter set that produces the right results for the right reasons. Incorporation of new process knowledge into a model adds new parameters that exacerbate the equifinality problem. Hence improved process understanding has not necessarily translated into improved models nor contributed to better predictions. Prediction under change confronts us with additional challenges: 1. Varying boundary conditions: Projections into the future can no longer be guided by observations in the past to the same degree as they could when the boundary conditions were considered stationary. 2. Ecohydrological adaptation: Common model parameters related to vegetation properties (e.g. canopy conductance, rooting depths) cannot be assumed invariant, as vegetation dynamically adapts to its environment. 3. No analog conditions for model evaluation: Climate change and in particular rising atmospheric CO2 concentrations will lead to conditions that cannot be found anywhere on Earth at present. Therefore it is doubtful whether the ability of a hydrological model to reproduce the past is indicative of its trustworthiness for predicting the future. We propose that optimality theory can help addressing some of the above challenges. Optimality theory submits that natural systems self-optimise to attain certain goal functions (or "objective functions"). Optimality principles allow an independent prediction of system properties that would otherwise require direct observations or calibration. 
The resulting

  18. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    PubMed

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs) the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFMs) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, Prob<0.0001). Regression coefficients represent the additional per-patient costs summed with the base payment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best-practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump-sum lung transplant payments; and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost
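    The reported regression translates directly into a per-patient payment formula: the constant is the base payment and the significant coefficients are added on top. A sketch using the published coefficients (treating 'procedures' as a count, and the other terms as 0/1 indicators, is an assumption):

    ```python
    # Published regression coefficients for DRG 173 (cystic fibrosis).
    coef = {
        "constant": 4492.7,     # base payment
        "emergency": 1276.9,
        "outlier": 6377.1,
        "complexity": 3043.5,
        "procedures": 317.4,
    }

    def payment(emergency=0, outlier=0, complexity=0, procedures=0):
        """Per-patient cost: base payment plus the significant adjustments."""
        return (coef["constant"]
                + emergency * coef["emergency"]
                + outlier * coef["outlier"]
                + complexity * coef["complexity"]
                + procedures * coef["procedures"])

    # e.g. an emergency admission with two procedures:
    print(payment(emergency=1, procedures=2))   # 4492.7 + 1276.9 + 2 * 317.4
    ```

    In the blended RACFM this amount would then be scaled by the best-practice annual admission rate per patient.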

  19. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    PubMed

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction for lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions

  20. Optimising muscle parameters in musculoskeletal models using Monte Carlo simulation.

    PubMed

    Reed, Erik B; Hanson, Andrea M; Cavanagh, Peter R

    2015-01-01

    The use of musculoskeletal simulation software has become a useful tool for modelling joint and muscle forces during human activity, including in reduced gravity, where direct experimentation is difficult. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler™ (San Clemente, CA, USA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces, but no activation was predicted in some large muscles, such as rectus femoris, which have been shown to be active during 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. The rectus femoris was predicted to peak at 60.1% activation in the same test case, compared to 19.2% activation using default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
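    The Monte Carlo search over muscle parameters can be sketched generically: sample candidate parameter sets at random and keep the one whose predicted activations best match observed targets. The activation model below is a simple stand-in, not LifeModeler's internals, and all numbers are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    target = np.array([0.60, 0.35, 0.50])          # observed peak activations

    def predict(params):
        # Stand-in activation model: activation rises with joint demand per
        # unit of muscle strength, offset by a passive contribution.
        fmax, act_gain, passive = params           # hypothetical muscle parameters
        load = np.array([300.0, 150.0, 220.0])     # joint demand per muscle (N)
        return np.clip(load / fmax * act_gain - passive, 0.0, 1.0)

    best, best_err = None, np.inf
    for _ in range(10_000):                        # Monte Carlo parameter sampling
        p = rng.uniform([200.0, 0.5, 0.0], [800.0, 2.0, 0.3])
        err = np.abs(predict(p) - target).max()    # worst-case activation mismatch
        if err < best_err:
            best, best_err = p, err

    print(best.round(2), round(best_err, 3))
    ```

    Combinatorial reduction would then narrow the sampled ranges around the best-performing region before resampling.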

  1. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  2. Parameter identification for a suction-dependent plasticity model

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Schrefler, B. A.

    2001-03-01

    In this paper, the deterministic parameter identification procedure proposed in a companion paper is applied to suction-dependent elasto-plasticity problems. A mathematical model for this type of problem is first presented; it is then applied to parameter identification using laboratory data. In a second example, the identification procedure is applied to the exploitation of a gas reservoir. The effects of the extraction of underground fluids appear during and after quite long periods of time and strongly condition the decision whether or not to exploit the natural resources. Identification procedures can be very useful tools for reliable long-term predictions.

  3. Investigation of RADTRAN Stop Model input parameters for truck stops

    SciTech Connect

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-03-01

    RADTRAN is a computer code for estimating the risks and consequences associated with the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of RADTRAN stop parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. The mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops.
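    Reducing surveyed stop data to means, standard deviations and histograms, as described, is straightforward; the stop durations below are hypothetical, not survey values:

    ```python
    import statistics

    # Hypothetical surveyed truck-stop durations (hours).
    stop_hours = [0.4, 0.6, 0.5, 1.2, 0.8, 0.3, 0.7, 0.9, 0.5, 0.6]

    mean = statistics.mean(stop_hours)       # default Stop Model input
    stdev = statistics.stdev(stop_hours)     # spread, for sensitivity studies

    # Coarse histogram (0.25-hour bins) to guide statistical sampling.
    bins = {}
    for h in stop_hours:
        b = int(h / 0.25) * 0.25
        bins[b] = bins.get(b, 0) + 1

    print(round(mean, 3), round(stdev, 3), dict(sorted(bins.items())))
    ```

    The histogram's shape, not just the mean, is what drives sampled Stop Dose sensitivity.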

  4. Study of Correlationships between Main Ionospheric Parameters by Stochastic Modeling

    NASA Astrophysics Data System (ADS)

    Podolská, K.; Truhlík, V.; Třísková, L.

    2012-04-01

    We employ multivariate statistical methods applied to long-period daily observational data to find time shifts between fundamental ionospheric parameters. The F2 layer critical frequency (foF2), the Kp index, the solar radio flux at 10.7 cm (F10.7 index), and the relative sunspot number R, as indicators of the phase of the solar cycle, were used as the studied time series. The E10.7 index (a solar EUV index based on F10.7) and TEC series were utilized as parallel observed series. The foF2 data series measured at mid-latitude ionosonde stations were used. To investigate the relationships between temporal and geographic variations of the studied parameters, we employ the method of conditional independence graphical models (CIG), which describes and transparently represents the structure of dependence relationships in the time series. This method appears useful for studying the correlations between fundamental ionospheric parameters and can be applied even when classical parametric methods are not convenient, e.g. for non-continuous time series. We consider the structure of pairwise dependence of its individual components, looking for the maximum likelihood estimate of the variance matrix under the conditions given by the graphical model. The CIG method allowed implementation of additional time-series variables into the previous model. Simultaneously we used a classical stochastic model. The best-fit relationship model for the data is computed.

  5. Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2016-08-01

    The aim of this study is to model temperature, pressure and humidity observations using spherical harmonics and to interpolate them for the derivation of precipitable water vapour (PWV) at TUSAGA-Active stations in the test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed for the study area by using GNSS observations have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).
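    Fitting a meteorological field with a low-degree spherical-harmonic basis by least squares, and then interpolating at an unobserved point, can be sketched as follows; the degree-1 basis, synthetic station temperatures and coefficients are assumptions, not the study's model:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    lat = np.radians(rng.uniform(38.0, 42.0, 200))   # station latitudes
    lon = np.radians(rng.uniform(28.0, 34.0, 200))   # station longitudes

    def basis(la, lo):
        # Real spherical harmonics up to degree 1 (unnormalized).
        return np.column_stack([
            np.ones_like(la),              # Y_0^0
            np.sin(la),                    # Y_1^0
            np.cos(la) * np.cos(lo),       # Y_1^1 cosine term
            np.cos(la) * np.sin(lo),       # Y_1^1 sine term
        ])

    true_c = np.array([280.0, 15.0, -3.0, 4.0])      # hypothetical coefficients (K)
    temp = basis(lat, lon) @ true_c + rng.normal(0.0, 0.5, lat.size)

    c_hat, *_ = np.linalg.lstsq(basis(lat, lon), temp, rcond=None)
    t_interp = basis(np.radians([40.0]), np.radians([31.0])) @ c_hat
    print(round(float(t_interp[0]), 2))              # interpolated temperature at 40N, 31E
    ```

    Over so small a window the basis functions are nearly collinear, so individual coefficients are poorly determined, but interpolated values inside the region remain precise, which is what PWV derivation needs.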

  6. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on multivariate power-normal distribution has been applied on mortality data from the United States for the years 1933 till 2000 to forecast the future mortality rates for the years 2001 till 2010. In this paper, a more dynamic approach based on the multivariate time series will be proposed where the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because apart from having good ability in covering the observed future mortality rates, they also tend to have distinctly shorter interval lengths.

  7. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
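
The simplest of the procedures above, a single-factor regression of local observations against the regional-model prediction P, can be sketched as follows; the data and helper name are synthetic illustrations, not values from the NURP cities:

```python
import numpy as np

# Sketch of the simplest MAP: regress local calibration observations
# against the regional-model prediction P, then reuse the fitted line
# to predict at unmonitored local sites.
rng = np.random.default_rng(0)
P = rng.uniform(1.0, 10.0, 30)                     # regional-model predicted storm loads
local = 0.6 * P + 1.5 + rng.normal(0.0, 0.4, 30)   # local calibration data set

slope, intercept = np.polyfit(P, local, 1)         # the "adjusted" regression model

def adjusted_prediction(p_regional):
    """Predict storm-runoff load at an unmonitored site from the
    regional prediction, using the locally adjusted coefficients."""
    return slope * p_regional + intercept

print(f"adjusted model: load = {slope:.2f} * P + {intercept:.2f}")
```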

  8. Estimating demographic parameters using hidden process dynamic models.

    PubMed

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error.
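
The hidden-process idea can be illustrated with a minimal capture-recapture hidden Markov model; the survival (phi) and detection (p) values below are illustrative:

```python
import numpy as np

# Minimal hidden process model: a capture-recapture HMM with true states
# {alive, dead}, survival probability phi, and detection probability p.
# The forward algorithm sums over the unknown true-state sequence.
phi, p = 0.8, 0.6
T = np.array([[phi, 1.0 - phi],
              [0.0, 1.0]])           # true-state dynamics (dead is absorbing)
E = np.array([[p, 1.0 - p],
              [0.0, 1.0]])           # observation process: columns = seen / not seen

def capture_history_likelihood(history):
    """Forward algorithm over a detection history (1 = seen, 0 = not seen),
    conditional on the animal being alive and seen at first capture."""
    alpha = np.array([1.0, 0.0])     # alive at release
    for obs in history[1:]:
        alpha = (alpha @ T) * E[:, 0 if obs else 1]
    return alpha.sum()

lik = capture_history_likelihood([1, 0, 1])
print(f"P(seen, missed, seen) = {lik:.4f}")
```

For this history the only surviving path is alive-alive-alive, so the likelihood reduces to phi * (1 - p) * phi * p.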

  9. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. 
The FEM study found that intraocular pressure and
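
The LHS-PRCC strategy described above can be sketched as follows; the three-parameter toy model stands in for the lumped-parameter or finite-element models of the study:

```python
import numpy as np
from scipy.stats import qmc, rankdata

# Sketch of LHS-PRCC: Latin hypercube sampling of the parameter space,
# then partial rank correlation of each parameter with the model output.
rng = np.random.default_rng(42)
sampler = qmc.LatinHypercube(d=3, seed=rng)
unit = sampler.random(n=200)                               # stratified samples in [0,1]^3
params = qmc.scale(unit, [0.5, 0.0, 1.0], [1.5, 2.0, 3.0]) # map onto parameter ranges

# Toy model output: strong linear effect of p0, nonlinear effect of p1,
# and no effect of p2.
output = 2.0 * params[:, 0] + 0.5 * params[:, 1] ** 2 + rng.normal(0.0, 0.1, 200)

def prcc(x, y, others):
    """Partial rank correlation: correlate the rank residuals of x and y
    after removing the linear effect of the other parameters' ranks."""
    A = np.column_stack([np.ones(len(x))] + [rankdata(o) for o in others.T])
    rx, ry = rankdata(x), rankdata(y)
    res_x = rx - A @ np.linalg.lstsq(A, rx, rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j in range(3):
    value = prcc(params[:, j], output, np.delete(params, j, axis=1))
    print(f"PRCC(parameter {j}) = {value:+.2f}")
```

The inert third parameter should show a PRCC near zero, while the influential parameters show PRCCs near one.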

  10. Comparison of Cone Model Parameters for Halo Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon

    2013-11-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle [γ] between the sky plane and the cone axis. We compare the parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48 and those between γ-values range from -0.08 to 0.47, which is much smaller than expected; the reason may lie in the models' different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s-1, 169 km s-1, and 152 km s-1, respectively. We obtain the correlation coefficients between the locations from the models and the flare location (R > 0.45). Finally, we discuss the strengths and weaknesses of these models in terms of space-weather application.

  11. Enhancing debris flow modeling parameters integrating Bayesian networks

    NASA Astrophysics Data System (ADS)

    Graf, C.; Stoffel, M.; Grêt-Regamey, A.

    2009-04-01

    Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running the model. Normally, the data base describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a certain torrent is limited. There are only a few places in the world where valuable data sets describing the event history of debris-flow channels can be found, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use flow-path and deposition-zone information from reconstructed debris-flow events, derived from dendrogeomorphological analysis covering more than 400 years, to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk

  12. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multivariant physical and mathematical models of a control system is presented, together with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  13. Model-based parameter estimation in electromagnetic computer modeling

    NASA Astrophysics Data System (ADS)

    Demarest, Kenneth R.

    1989-04-01

    Modeling in Computational Electromagnetics (CEM) can be a numerically demanding exercise. There are essentially two factors that contribute to this situation. One is the need to describe the propagation of the electromagnetic field via the Maxwell curl equations, Green's function, mode expansions, or ray and geometrical optics. It is in this part of the problem that a source-field relationship is quantitatively developed. The other is the subsequent need to invert the source-field relationship to proceed from prescribed existing fields and known sources to the induced sources that result and the fields they consequently produce. A moment-method solution, based on an integral equation formulation, embodies both of these factors. There are basically two paths by which the computer times involved in CEM applications might be reduced. One would be the development of alternate formulations that reduce the time required for either of the activities listed above, or that eliminate the need for it completely. The geometrical theory of diffraction is one example of this path. The other would be the development of more efficient numerical approaches for implementing the moment-method model. Under this contract we have investigated several means of reducing the computation time involved in the applications of integral equation, moment-method modeling.

  14. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
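
The core idea of stochastic parameter estimation, augmenting the filter state with an uncertain parameter so the Kalman update corrects it alongside the state, can be sketched with a scalar toy model; this is a generic augmented-state Kalman filter, not the SPEKF formulation itself, and all values are illustrative:

```python
import numpy as np

# Toy sketch: the imperfect model has an unknown additive forcing b.
# Appending b to the filter state lets the Kalman update estimate it
# from observations of x alone.
rng = np.random.default_rng(1)
a, b_true, q, r = 0.9, 2.0, 0.05, 0.1

F = np.array([[a, 1.0],
              [0.0, 1.0]])            # augmented dynamics: x' = a*x + b, b' = b
H = np.array([[1.0, 0.0]])            # only x is observed
Q = np.diag([q, 1e-4])                # tiny random walk on b keeps it adaptive
R = np.array([[r]])

x_true = 0.0
z = np.zeros((2, 1))                  # estimate of [x, b]; b starts at 0
P = np.eye(2)
for _ in range(500):
    x_true = a * x_true + b_true + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r))
    z = F @ z                         # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R               # update
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (np.array([[y]]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated forcing b = {z[1, 0]:.2f} (true value {b_true})")
```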

  15. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    SciTech Connect

    Young, L; Yang, F

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation, and 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for field sizes of 3.6 cm or less, where the initial gamma passing rates progressively decreased with field size; e.g., for a 1.6 cm field size, the gamma passing rate increased from 56.1% to 98.8% after the adjustment. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small field VMAT treatment plans for fixed jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.
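
The gamma evaluation referenced above can be illustrated with a minimal 1-D implementation of the 3% dose / 3 mm distance-to-agreement criteria; the dose profiles here are synthetic:

```python
import numpy as np

# Minimal 1-D gamma-index evaluation (3%/3 mm). Each measured point passes
# if some nearby reference point agrees within the combined dose/distance
# tolerance (gamma <= 1).
def profile(x, shift=0.0):
    return np.exp(-((x - 25.0 - shift) / 10.0) ** 2)

x_ref = np.arange(0.0, 50.0, 0.1)    # finely sampled reference (TPS) positions, mm
x_meas = np.arange(0.0, 50.0, 1.0)   # measurement positions, mm
ref = profile(x_ref)
meas = profile(x_meas, shift=0.5)    # measured profile with a 0.5 mm offset

def gamma_index(dose_tol=0.03, dist_tol=3.0):
    ref_max = ref.max()
    g = []
    for xi, di in zip(x_meas, meas):
        dd = (ref - di) / (dose_tol * ref_max)   # dose-difference term
        dx = (x_ref - xi) / dist_tol             # distance-to-agreement term
        g.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(g)

g = gamma_index()
print(f"gamma pass rate: {100.0 * (g <= 1.0).mean():.1f}%")
```

A 0.5 mm shift is well inside the 3 mm tolerance, so all points should pass; larger modeling errors push gamma above 1 and lower the pass rate.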

  16. Analysis of case-parent trios for imprinting effect using a loglinear model with adjustment for sex-of-parent-specific transmission ratio distortion.

    PubMed

    Huang, Lam Opal; Infante-Rivard, Claire; Labbe, Aurélie

    2017-08-01

    Transmission ratio distortion (TRD) is a phenomenon where parental transmission of disease allele to the child does not follow the Mendelian inheritance ratio. TRD occurs in a sex-of-parent-specific or non-sex-of-parent-specific manner. An offset computed from the transmission probability of the minor allele in control-trios can be added to the loglinear model to adjust for TRD. Adjusting the model removes the inflation in the genotype relative risk (RR) estimate and Type 1 error introduced by non-sex-of-parent-specific TRD. We now propose to further extend this model to estimate an imprinting parameter. Some evidence suggests that more than 1% of all mammalian genes are imprinted. In the presence of imprinting, for example, the offspring inheriting an over-transmitted disease allele from the parent with a higher expression level in a neighboring gene is over-represented in the sample. TRD mechanisms such as meiotic drive and gametic competition occur in a sex-of-parent-specific manner. Therefore, sex-of-parent-specific TRD (ST) leads to over-representation of maternal or paternal alleles in the affected child. As a result, ST may bias the imprinting effect when present in the sample. We propose a sex-of-parent-specific transmission offset in adjusting the loglinear model to account for ST. This extended model restores the correct RR estimates for child and imprinting effects, adjusts for inflation in Type 1 error, and improves performance on sensitivity and specificity compared to the original model without ST offset. We conclude that to correctly interpret the association signal of an imprinting effect, adjustment for ST is necessary to ensure valid conclusions.

  17. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  18. Two models to compute an adjusted Green Vegetation Fraction taking into account the spatial variability of soil NDVI

    NASA Astrophysics Data System (ADS)

    Montandon, L. M.; Small, E.

    2008-12-01

    The green vegetation fraction (Fg) is an important climate and hydrologic model parameter. The commonly-used Fg model is a simple linear mixing of two NDVI end-members: bare soil NDVI (NDVIo) and full vegetation NDVI (NDVI∞). NDVI∞ is generally set as a percentile of the historical maximum NDVI for each land cover. This approach works well for areas where Fg reaches full cover (100%). Because many biomes do not reach Fg=0, however, NDVIo is often determined as a single invariant value for all land cover types. In general, it is selected among the lowest NDVI observed over bare or desert areas, yielding NDVIo close to zero. There are two issues with this approach: large-scale variability of soil NDVI is ignored, and observations on a wide range of soils show that soil NDVI is often larger. Here we introduce and test two new approaches to compute Fg that take into account the spatial variability of soil NDVI. The first approach uses a global soil NDVI database and time series of MODIS NDVI data over the conterminous United States to constrain possible soil NDVI values over each pixel. Fg is computed using a subset of the soils database that respects the linear mixing model condition NDVIo≤NDVIh, where NDVIh is the pixel historical minimum. The second approach uses an empirical soil NDVI model that combines information on soil organic matter content and texture to infer soil NDVI. The U.S. General Soil Map (STATSGO2) database is used as input for spatial soil properties. Using in situ measurements of soil NDVI from sites that span a range of land cover types, we test both models and compare their performance to the standard Fg model. We show that our models adjust the temporal Fg estimates by 40-90% depending on the land cover type and amplitude of the seasonal NDVI signal. Using MODIS NDVI and soil maps over the conterminous U.S., we also study the spatial distribution of Fg adjustments in February and June 2008. We show that the standard Fg method
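
The two-end-member linear mixing model, and the effect of replacing the single invariant soil NDVI with a spatially varying one, can be sketched as follows; all NDVI values are illustrative:

```python
import numpy as np

# Sketch of the linear-mixing green vegetation fraction,
# Fg = (NDVI - NDVIo) / (NDVIinf - NDVIo), clipped to [0, 1].
def green_fraction(ndvi, ndvi_soil, ndvi_full):
    fg = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fg, 0.0, 1.0)

ndvi = np.array([0.15, 0.40, 0.70])   # observed NDVI at three pixels

# Standard approach: one invariant soil NDVI near zero for every pixel.
fg_standard = green_fraction(ndvi, ndvi_soil=0.05, ndvi_full=0.85)

# Adjusted approach: per-pixel soil NDVI (e.g., from a soils database),
# honoring the mixing-model condition NDVIo <= pixel historical minimum.
ndvi_soil_px = np.minimum(np.array([0.12, 0.20, 0.18]), ndvi)
fg_adjusted = green_fraction(ndvi, ndvi_soil_px, ndvi_full=0.85)

print("standard Fg:", np.round(fg_standard, 2))
print("adjusted Fg:", np.round(fg_adjusted, 2))
```

Because realistic soil NDVI values exceed zero, the adjusted Fg is lower than the standard estimate, with the largest relative changes at sparsely vegetated pixels.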

  19. Social support and psychological adjustment among Latinas with arthritis: a test of a theoretical model.

    PubMed

    Abraído-Lanza, Ana F

    2004-06-01

    Among people coping with chronic illness, tangible social support sometimes has unintended negative consequences on the recipient's psychological health. Identity processes may help explain these effects. Individuals derive self-worth and a sense of competence by enacting social roles that are central to the self-concept. This study tested a model drawing from some of these theoretical propositions. The central hypothesis was that tangible support in fulfilling a highly valued role undermines self-esteem and a sense of self-efficacy, which, in turn, affect psychological adjustment. Structured interviews were conducted with 98 Latina women with arthritis who rated the homemaker identity as being of central importance to the self-concept. A path analysis indicated that, contrary to predictions, tangible housework support was related to less psychological distress. Emotional support predicted greater psychological well-being. These relationships were not mediated by self-esteem or self-efficacy. Qualitative data revealed that half of the sample expressed either ambivalent or negative feelings about receiving housework support. Results may reflect social and cultural norms concerning the types of support that are helpful and appropriate from specific support providers. Future research should consider the cultural meaning and normative context of the support transaction. This study contributes to scarce literatures on the mechanisms that mediate the relationship between social support and adjustment, as well as illness and psychosocial adaptation among Latina women with chronic illness.

  20. Social Support and Psychological Adjustment Among Latinas With Arthritis: A Test of a Theoretical Model

    PubMed Central

    Abraído-Lanza, Ana F.

    2013-01-01

    Background Among people coping with chronic illness, tangible social support sometimes has unintended negative consequences on the recipient’s psychological health. Identity processes may help explain these effects. Individuals derive self-worth and a sense of competence by enacting social roles that are central to the self-concept. Purpose This study tested a model drawing from some of these theoretical propositions. The central hypothesis was that tangible support in fulfilling a highly valued role undermines self-esteem and a sense of self-efficacy, which, in turn, affect psychological adjustment. Methods Structured interviews were conducted with 98 Latina women with arthritis who rated the homemaker identity as being of central importance to the self-concept. Results A path analysis indicated that, contrary to predictions, tangible housework support was related to less psychological distress. Emotional support predicted greater psychological well-being. These relationships were not mediated by self-esteem or self-efficacy. Qualitative data revealed that half of the sample expressed either ambivalent or negative feelings about receiving housework support. Conclusions Results may reflect social and cultural norms concerning the types of support that are helpful and appropriate from specific support providers. Future research should consider the cultural meaning and normative context of the support transaction. This study contributes to scarce literatures on the mechanisms that mediate the relationship between social support and adjustment, as well as illness and psychosocial adaptation among Latina women with chronic illness. PMID:15184092

  1. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, an error that propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) on the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate propagation of the bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of the CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
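
A distribution-matching bias adjustment in the spirit of the approach above, though not the exact Zhang et al. (2013) scheme, can be sketched with quantile mapping of satellite rain rates onto an NWP-derived distribution; all data are synthetic:

```python
import numpy as np

# Hedged sketch of quantile mapping: map satellite rain-rate quantiles onto
# the NWP-derived quantiles, which mainly boosts underestimated high rates.
rng = np.random.default_rng(7)
truth = rng.gamma(2.0, 5.0, 5000)                  # "true" rain rates (mm/h)
sat = 0.6 * truth + rng.normal(0.0, 1.0, 5000)     # satellite underestimates high rates
nwp = truth + rng.normal(0.0, 2.0, 5000)           # NWP gets the distribution about right

q = np.linspace(0.01, 0.99, 99)
sat_adj = np.interp(sat, np.quantile(sat, q), np.quantile(nwp, q))

heavy = truth > 20.0                               # focus on flood-inducing rates
bias_before = np.mean(sat[heavy] - truth[heavy])
bias_after = np.mean(sat_adj[heavy] - truth[heavy])
print(f"high-rate bias: {bias_before:.1f} mm/h before, {bias_after:.1f} mm/h after")
```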

  2. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
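
The equivalence between dynamic imaging and the static imaging of a line-segment source can be illustrated by averaging displaced Gaussian spots over the exposure; all parameter values below are illustrative, not those of the paper:

```python
import numpy as np

# Toy smearing simulation: during the exposure a Gaussian star spot slides
# along the x axis, so the recorded image is an average of displaced static
# spots (a line-segment spread).
def smeared_spot(v=6.0, t_exp=1.0, sigma=0.8, n=64, steps=200):
    yy, xx = np.mgrid[0:n, 0:n].astype(float)
    img = np.zeros((n, n))
    for t in np.linspace(0.0, t_exp, steps):
        cx = n / 2 + v * (t - t_exp / 2)          # spot centre slides with time
        img += np.exp(-((xx - cx) ** 2 + (yy - n / 2) ** 2) / (2 * sigma ** 2))
    return img / steps

img = smeared_spot()
total = img.sum()
x = np.arange(64, dtype=float)
cx = (x[None, :] * img).sum() / total             # intensity-weighted centroid
sx = np.sqrt((((x[None, :] - cx) ** 2) * img).sum() / total)
print(f"centroid x: {cx:.3f} px, smeared x-width: {sx:.2f} px (static sigma 0.8)")
```

For symmetric motion the centroid stays at the mid-exposure position, but the spot widens along the motion direction (variance grows by roughly (v*t_exp)^2/12), which is what degrades centroiding accuracy in the presence of noise.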

  3. Race and Gender Influences on Adjustment in Early Adolescence: Investigation of an Integrative Model.

    ERIC Educational Resources Information Center

    DuBois, David L.; Burk-Braxton, Carol; Swenson, Lance P.; Tevendale, Heather D.; Hardesty, Jennifer L.

    2002-01-01

    Investigated the influence of racial and gender discrimination and difficulties on adolescent adjustment. Found that discrimination and hassles contribute to a general stress context which in turn influences emotional and behavioral problems in adjustment, while racial and gender identity positively affect self-esteem and thus adjustment. Revealed…

  4. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of Rd^2 as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and, in some cases, provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to a location dimension, it is desirable to first make use of the geographical component to explore explicitly spatial aspects of the crash data (i.e., the spatial heterogeneity, or the spatially structured varying relationships), and then address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR is proven to be more appropriate for regional crash modeling, as the method outperforms the global models in capturing the spatial heterogeneity occurring in the modeled relationship; compared with the non-spatial model, it is also capable of accounting for the spatial correlation in crash data.

  5. A data-driven model of present-day glacial isostatic adjustment in North America

    NASA Astrophysics Data System (ADS)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observatio