Science.gov

Sample records for adjustable model parameters

  1. Impacts of Parameters Adjustment of Relativistic Mean Field Model on Neutron Star Properties

    NASA Astrophysics Data System (ADS)

    Kasmudin; Sulaksono, A.

    An analysis of parameter-adjustment effects in the isovector as well as the isoscalar sector of the effective-field-based relativistic mean field (E-RMF) model on the properties of symmetric nuclear matter and neutron-rich matter has been performed. The impacts of the adjustment on slowly rotating neutron stars are systematically investigated. It is found that the mass-radius relation obtained from the adjusted parameter set G2** is compatible not only with the neutron-star masses of 4U 0614+09 and 4U 1636-536, but also with those from the thermal radiation measurement of RX J1856 and with the radius range of the canonical neutron star X7 in 47 Tuc. It is also found that the moment of inertia of PSR J0737-3039A and the strain amplitude of the gravitational wave in the Earth's vicinity from PSR J0437-4715, as predicted by the E-RMF parameter sets used, are in reasonable agreement with the constraints extracted for these observations from isospin diffusion data.

  2. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    NASA Astrophysics Data System (ADS)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling technique that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. The waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time considerably: average patient waiting time falls by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
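
    The M/M/c quantities the study simulates can also be computed in closed form via the Erlang C formula, which gives the probability that an arriving patient must queue and, from it, the mean waiting time. A minimal sketch (the arrival and service rates below are hypothetical, not the hospital's measured values):

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arrival must wait in an M/M/c queue
    (lam = arrival rate, mu = service rate per server)."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilisation, must be < 1
    top = a**c / factorial(c) / (1.0 - rho)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Expected time in queue Wq, in the same time units as 1/mu."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical rates: 30 patients/hour, 18 prescriptions/hour per pharmacist
lam, mu = 30.0, 18.0
for c in (2, 3, 4):
    print(f"M/M/{c}: Wq = {mean_wait(c, lam, mu) * 60:.2f} min")
```

    With these illustrative rates, going from two to three servers cuts the mean queueing time by an order of magnitude, matching the qualitative finding that one extra pharmacist yields most of the improvement.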

  3. Optical phantoms with adjustable subdiffusive scattering parameters.

    PubMed

    Krauter, Philipp; Nothelfer, Steffen; Bodenschatz, Nico; Simon, Emanuel; Stocker, Sabrina; Foschum, Florian; Kienle, Alwin

    2015-10-01

    A new epoxy-resin-based optical phantom system with adjustable subdiffusive scattering parameters is presented along with measurements of the intrinsic absorption, scattering, fluorescence, and refractive index of the matrix material. Both an aluminium oxide powder and a titanium dioxide dispersion were used as scattering agents and we present measurements of their scattering and reduced scattering coefficients. A method is theoretically described for a mixture of both scattering agents to obtain continuously adjustable anisotropy values g between 0.65 and 0.9 and values of the phase function parameter γ in the range of 1.4 to 2.2. Furthermore, we show absorption spectra for a set of pigments that can be added to achieve particular absorption characteristics. By additional analysis of the aging, a fully characterized phantom system is obtained with the novelty of g and γ parameter adjustment. PMID:26473589
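
    The continuously adjustable anisotropy g follows a simple mixing rule if the two scattering agents act independently: the mixture anisotropy is the scattering-coefficient-weighted mean of the component anisotropies. A minimal sketch (the component g values and coefficients below are illustrative, not the paper's measured Al2O3/TiO2 data):

```python
def mix_anisotropy(mus1, g1, mus2, g2):
    """Anisotropy of a two-agent mixture: scattering-coefficient-weighted
    mean of the component g values (agents scattering independently)."""
    return (mus1 * g1 + mus2 * g2) / (mus1 + mus2)

def mus1_for_target_g(g1, mus2, g2, g_target):
    """Scattering coefficient of agent 1 that brings the mixture to
    g_target; g_target must lie strictly between g1 and g2."""
    return mus2 * (g2 - g_target) / (g_target - g1)

# Illustrative values: a low-g agent (0.60) and a high-g agent (0.90)
g1, g2 = 0.60, 0.90
mus2 = 1.0                      # arbitrary units
mus1 = mus1_for_target_g(g1, mus2, g2, 0.75)
```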

  4. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved-resonance-region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, we created the SAMINT code to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
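
    The GLLS step that TSURFER and SAMMY share is the standard Bayesian linear update of a parameter vector from measured responses. A minimal sketch with invented toy numbers (not an actual SCALE/SAMMY interface):

```python
import numpy as np

def glls_update(p, M, S, d, k_p, V):
    """One GLLS (Bayesian) update: prior parameters p with covariance M,
    sensitivities S = dk/dp (responses x parameters), measured integral
    responses d (covariance V), and computed responses k_p."""
    G = S @ M @ S.T + V                      # innovation covariance
    K = M @ S.T @ np.linalg.inv(G)           # gain matrix
    return p + K @ (d - k_p), M - K @ S @ M  # updated parameters, covariance

# Toy example: two resonance-like parameters, one integral response
p = np.array([1.0, 2.0])
M = np.diag([0.04, 0.09])
S = np.array([[0.5, 1.0]])
V = np.array([[0.01]])
d = np.array([2.7])                          # "measured" benchmark response
p_new, M_new = glls_update(p, M, S, d, S @ p, V)
```

    The updated parameters move the computed response toward the benchmark value, and the posterior covariance shrinks, which is how integral data tighten a differential evaluation without replacing it.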

  5. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root-zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized streamflow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
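
    The skill metric used here, the correlation coefficient of the anomaly time series, can be sketched directly: compute anomalies against each series' own mean seasonal cycle, then correlate. The data below are synthetic, for illustration only:

```python
import numpy as np

def anomaly_skill(model, obs, period=12):
    """Pearson correlation of anomaly time series: each series minus its
    own mean seasonal cycle of length `period` (12 for monthly data)."""
    def anomalies(x):
        x = np.asarray(x, float)
        clim = np.array([x[k::period].mean() for k in range(period)])
        return x - clim[np.arange(len(x)) % period]
    return np.corrcoef(anomalies(model), anomalies(obs))[0, 1]

# Two synthetic 10-year monthly series sharing anomalies but with very
# different seasonal cycles: anomaly skill is high regardless.
t = np.arange(120)
shared = np.random.default_rng(7).normal(size=t.size)
model = 10.0 * np.sin(2 * np.pi * t / 12) + shared
obs = 3.0 * np.cos(2 * np.pi * t / 12) + shared
```

    Removing each series' climatology first is what lets the metric reward agreement in interannual variability rather than in the (easy) seasonal cycle.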

  6. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    PubMed

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-01

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume, and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the optimum growth temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins from differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward the usage of heavier and more hydrophobic residues with respect to mesophiles, whereas cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation of the different increase of bulky residues with growth temperature observed in the six model proteins suggests the relevance of the possibly different roles and/or structural organization of protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was
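
    An average-hydrophobicity-per-residue parameter of the kind used above can be computed from a hydropathy scale. The sketch below uses the common Kyte-Doolittle scale, which is not necessarily the exact hydrophobicity measure the authors adopted:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
      'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
      'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
      'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def mean_hydropathy(seq):
    """Average hydropathy per residue of a one-letter protein sequence."""
    return sum(KD[a] for a in seq) / len(seq)
```

    Regressing this per-protein average against the host's optimum growth temperature is then a one-line fit, which is essentially the linear correlation the study reports.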

  7. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    SciTech Connect

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  8. Europium Luminescence: Electronic Densities and Superdelocalizabilities for a Unique Adjustment of Theoretical Intensity Parameters

    PubMed Central

    Dutra, José Diogo L.; Lima, Nathalia B. D.; Freire, Ricardo O.; Simas, Alfredo M.

    2015-01-01

    We advance the concept that the charge factors of the simple overlap model and the polarizabilities of Judd-Ofelt theory for the luminescence of europium complexes can be effectively and uniquely modeled by perturbation theory on the semiempirical electronic wave function of the complex. With only three adjustable constants, we introduce expressions that relate: (i) the charge factors to electronic densities, and (ii) the polarizabilities to superdelocalizabilities that we derived specifically for this purpose. The three constants are then adjusted iteratively until the calculated intensity parameters, corresponding to the 5D0→7F2 and 5D0→7F4 transitions, converge to the experimentally determined ones. This adjustment yields a single unique set of only three constants per complex and semiempirical model used. From these constants, we then define a binary outcome acceptance attribute for the adjustment, and show that when the adjustment is acceptable, the predicted geometry is, on average, closer to the experimental one. An important consequence is that the terms of the intensity parameters related to the dynamic coupling and electric dipole mechanisms will be unique. Hence, the important energy transfer rates will also be unique, leading to a single predicted intensity parameter for the 5D0→7F6 transition. PMID:26329420

  9. 40 CFR 86.1833-01 - Adjustable parameters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... return to its original shape after the force is removed (plastic or spring steel materials); (D) In the... bimetal spring, the plate covering the bimetal spring is riveted or welded in place, or held in place with... regardless of additional forces or torques applied to the adjustment; (C) The manufacturer demonstrates...

  10. 40 CFR 86.1833-01 - Adjustable parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... return to its original shape after the force is removed (plastic or spring steel materials); (D) In the... bimetal spring, the plate covering the bimetal spring is riveted or welded in place, or held in place with... regardless of additional forces or torques applied to the adjustment; (C) The manufacturer demonstrates...

  11. 40 CFR 86.1833-01 - Adjustable parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... return to its original shape after the force is removed (plastic or spring steel materials); (D) In the... bimetal spring, the plate covering the bimetal spring is riveted or welded in place, or held in place with... regardless of additional forces or torques applied to the adjustment; (C) The manufacturer demonstrates...

  12. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... achieved the same percent reduction in NOX emissions from the optimal calibration would be considered to be... must specify in the maintenance instructions how to adjust the engines to achieve emission performance... performance that could be achieved in the absence of emission standards (i.e., the calibration that result...

  13. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... achieved the same percent reduction in NOX emissions from the optimal calibration would be considered to be... must specify in the maintenance instructions how to adjust the engines to achieve emission performance... performance that could be achieved in the absence of emission standards (i.e., the calibration that result...

  14. Parameters and error of a theoretical model

    SciTech Connect

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random-number generators. 2 refs., 4 tabs.
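
    The random-number experiment described can be reproduced in miniature: generate pseudo-data about a known model with known intrinsic scatter, fit the parameters by least squares (the maximum-likelihood solution for Gaussian errors), and recover the model error as the rms residual. All values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pseudo-data about a known linear model with intrinsic scatter sigma_th
sigma_th = 0.5
x = np.linspace(0.0, 10.0, 200)
data = 2.0 * x + 1.0 + rng.normal(0.0, sigma_th, x.size)

# Least-squares fit of slope and intercept
A = np.vstack([x, np.ones_like(x)]).T
params, *_ = np.linalg.lstsq(A, data, rcond=None)

# ML estimate of the model error: rms residual about the fitted model
# (valid when experimental errors are negligible next to the model error)
sigma_est = np.sqrt(np.mean((data - A @ params) ** 2))
```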

  15. 40 CFR 86.1833-01 - Adjustable parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of settings other than the manufacturer's recommended setting on vehicle performance characteristics including emission characteristics. (2)(i) A parameter may be determined to be adequately inaccessible or... return to its original shape after the force is removed (plastic or spring steel materials); (D) In...

  16. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator that can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and, given the necessity of parameter adjustments, present a generalized parameter-adjusted SR (GPASR) model of this oscillator. The Kramers rate is chosen as the theoretical basis for establishing a function that judges the occurrence of SR in this model, and for analyzing and summarizing the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671
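
    For reference, the Kramers rate on which the GPASR analysis rests has a closed form for the standard bistable quartic potential; the parameterisation below is the generic textbook one, not necessarily the paper's exact Duffing model:

```python
import numpy as np

def kramers_rate(a, b, D):
    """Kramers escape rate for the bistable potential
    U(x) = -a*x**2/2 + b*x**4/4 (a, b > 0) at noise intensity D:
    r = sqrt(U''(x_min) * |U''(0)|) / (2*pi) * exp(-dU/D),
    with U''(x_min) = 2a, |U''(0)| = a, barrier dU = a**2 / (4*b)."""
    barrier = a ** 2 / (4.0 * b)
    prefactor = np.sqrt(2.0) * a / (2.0 * np.pi)  # sqrt(2a * a) / (2 pi)
    return prefactor * np.exp(-barrier / D)
```

    A common SR tuning heuristic is to adjust parameters until twice this hopping rate matches the driving frequency, which is the kind of parameter-adjustment rule the abstract summarizes.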

  19. 40 CFR 91.112 - Requirement of certification-adjustable parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Requirement of certification-adjustable parameters. 91.112 Section 91.112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... adjustable range during certification, production line testing, selective enforcement auditing or any...

  20. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    PubMed

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  1. Saturation-power enhancement of a free-electron laser amplifier through parameters adjustment

    NASA Astrophysics Data System (ADS)

    Ji, Yu-Pin; Xu, Y.-G.; Wang, S.-J.; Xu, J.-Y.; Liu, X.-X.; Zhang, S.-C.

    2015-06-01

    Saturation-power enhancement of a free-electron laser (FEL) amplifier by using a tapered wiggler amplitude is based on postponing the saturation length of the uniform wiggler. In this paper, we qualitatively and quantitatively demonstrate that a saturation-power enhancement comparable to that of a tapered wiggler can be achieved by means of parameter adjustment. Compared to tapering the wiggler amplitude, the method of parameter adjustment substantially shortens the saturation length, which is favorable for cutting down the manufacturing and operating costs of the device.

  2. Parameter uncertainty for ASP models

    SciTech Connect

    Knudsen, J.K.; Smith, C.L.

    1995-10-01

    The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (i.e., lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution will be used for the ASP model uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that are comprised of more than one component, each of which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be used to propagate parameter uncertainty for event assessments.
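
    For the chosen lognormal distribution, PRA practice commonly specifies a basic-event median and an error factor EF (95th percentile over the median); converting these to the underlying (mu, sigma) and sampling is then straightforward. The numbers below are made up for illustration:

```python
import numpy as np

def lognormal_from_median_ef(median, ef):
    """Lognormal (mu, sigma) from a median and an error factor
    EF = 95th percentile / median, a common PRA parameterisation."""
    return np.log(median), np.log(ef) / 1.6449  # 1.6449: 95th-pct z-score

# Made-up basic event: median failure probability 1e-3, error factor 10
mu, sigma = lognormal_from_median_ef(1e-3, 10.0)
samples = np.random.default_rng(1).lognormal(mu, sigma, 100_000)
```

    Sampling the basic events this way and pushing the samples through the event-tree logic is one simple way to propagate parameter uncertainty to an event assessment.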

  3. Examining the Correlation between Objective Injury Parameters, Personality Traits, and Adjustment Measures among Burn Victims

    PubMed Central

    Weissman, Oren; Domniz, Noam; Petashnick, Yoel R.; Gilboa, Dalia; Raviv, Tal; Barzilai, Liran; Farber, Nimrod; Harats, Moti; Winkler, Eyal; Haik, Josef

    2015-01-01

    Background: Burn victims experience immense physical and mental hardship during their process of rehabilitation and regaining functionality. We examined different objective burn-related factors as well as psychological ones, in the form of personality traits that may affect the rehabilitation process and its outcome. Objective: To assess the influence and correlation of specific personality traits and objective injury-related parameters on the adjustment of burn victims post-injury. Methods: Sixty-two male patients admitted to our burn unit due to burn injuries were compared with 36 healthy male individuals by use of questionnaires to assess each group’s psychological adjustment parameters. Multivariate and hierarchical regression analysis was conducted to identify differences between the groups. Results: A significant negative correlation was found between the objective burn injury severity (e.g., total body surface area and burn depth) and the adjustment of burn victims (p < 0.05, p < 0.001, Table 3). Moreover, patients more severely injured tend to be more neurotic (p < 0.001), and less extroverted and agreeable (p < 0.01, Table 4). Conclusion: Extroverted burn victims tend to adjust better to their post-injury life while the neurotic patients tend to have difficulties adjusting. This finding may suggest new tools for early identification of maladjustment-prone patients and therefore provide them with better psychological support in a more dedicated manner. PMID:25874193

  4. Risk-Adjusted Models for Adverse Obstetric Outcomes and Variation in Risk Adjusted Outcomes Across Hospitals

    PubMed Central

    Bailit, Jennifer L.; Grobman, William A.; Rice, Madeline Murguia; Spong, Catherine Y.; Wapner, Ronald J.; Varner, Michael W.; Thorp, John M.; Leveno, Kenneth J.; Caritis, Steve N.; Shubert, Phillip J.; Tita, Alan T. N.; Saade, George; Sorokin, Yoram; Rouse, Dwight J.; Blackwell, Sean C.; Tolosa, Jorge E.; Van Dorsten, J. Peter

    2014-01-01

    Objective: Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take pre-existing patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for five obstetric outcomes and assess hospital performance across these outcomes. Study Design: A cohort study of 115,502 women and their neonates born in 25 hospitals in the United States between March 2008 and February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Results: Venous thromboembolism occurred too infrequently (0.03%, 95% CI 0.02%–0.04%) for meaningful assessment. The other outcomes occurred frequently enough for assessment (postpartum hemorrhage 2.29% (95% CI 2.20–2.38), peripartum infection 5.06% (95% CI 4.93–5.19), severe perineal laceration at spontaneous vaginal delivery 2.16% (95% CI 2.06–2.27), neonatal composite 2.73% (95% CI 2.63–2.84)). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (by as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Conclusions: Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. PMID:23891630

  5. An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.

    PubMed

    Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G

    2008-12-01

    When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. When patients are under chemotherapy, despite the huge amount of blood monitoring numerations, there is a lack of exposure variables to describe hematotoxicity linked with the circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations to adequate exposures, destined to be inputs of the pharmacodynamic process. The model is materialized by a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to the variations of the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581

  6. Reductions in particulate and NO(x) emissions by diesel engine parameter adjustments with HVO fuel.

    PubMed

    Happonen, Matti; Heikkilä, Juha; Murtonen, Timo; Lehto, Kalle; Sarjovaara, Teemu; Larmi, Martti; Keskinen, Jorma; Virtanen, Annele

    2012-06-01

    Hydrotreated vegetable oil (HVO) diesel fuel is a promising biofuel candidate that can complement or substitute for traditional diesel fuel in engines. It has already been reported that changing the fuel from conventional EN590 diesel to HVO decreases exhaust emissions. However, as the fuels have certain chemical and physical differences, it is clear that the full advantage of HVO cannot be realized unless the engine is optimized for the new fuel. In this article, we studied how much exhaust emissions can be reduced by adjusting engine parameters for HVO. The results indicate that, at all the studied loads (50%, 75%, and 100%), particulate mass and NO(x) can both be reduced by over 25% through engine parameter adjustments. Further, the emission reduction was even higher when the target of adjusting engine parameters was to reduce either particulates or NO(x) exclusively. In addition to particulate mass, different indicators of particulate emissions were also compared, including filter smoke number (FSN), total particle number, total particle surface area, and geometric mean diameter of the emitted particle size distribution. As a result of this comparison, a linear correlation between FSN and total particulate surface area was found in the low-FSN region.

  7. Parameter identification in continuum models

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.

    1983-01-01

    Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented. Previously announced in STAR as N83-28934

  9. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. 
Steady-state excitation, and de-energization and re-energization transients
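Where test data are incomplete, curve parameters such as the lambda-i saturation characteristic must be fitted from a few measured points. The sketch below is a hypothetical illustration of that kind of fit, not the method developed in this work; the two-term curve form i = a·λ + b·λⁿ, the exponent n, and the test values are all assumptions.

```python
# Hypothetical sketch: fit i(lam) = a*lam + b*lam**n to open-circuit
# test points by linear least squares (n held fixed), illustrating
# parameter estimation from limited test data. Values are illustrative.

def fit_saturation(points, n=9):
    """Least-squares fit of a, b in i = a*lam + b*lam**n (2x2 normal equations)."""
    s11 = sum(lam * lam for lam, _ in points)
    s12 = sum(lam * lam**n for lam, _ in points)
    s22 = sum(lam**n * lam**n for lam, _ in points)
    t1 = sum(lam * i for lam, i in points)
    t2 = sum(lam**n * i for lam, i in points)
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# Synthetic open-circuit test data from a known curve (a=0.05, b=2.0, n=9).
test_points = [(lam / 10, 0.05 * (lam / 10) + 2.0 * (lam / 10) ** 9)
               for lam in range(1, 11)]
a, b = fit_saturation(test_points)
```

With exact synthetic data the fit recovers the generating coefficients; with real test points the same normal equations give the least-squares estimate.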

  10. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  11. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  12. Initial experience in operation of furnace burners with adjustable flame parameters

    SciTech Connect

    Garzanov, A.L.; Dolmatov, V.L.; Saifullin, N.R.

    1995-07-01

The designs of burners currently used in tube furnaces (CP, FGM, GMG, GIK, GNF, etc.) do not have any provision for adjusting the heat-transfer characteristics of the flame, since the gas and air feed systems in these burners do not allow any variation of the parameters of mixture formation, even though this process is critical in determining the length, shape, and luminosity of the flame and also the furnace operating conditions: efficiency, excess air coefficient, flue gas temperature at the bridgewall, and other indices. In order to provide control of the heat-transfer characteristics of the flame, the Elektrogorsk Scientific-Research Center (ENITs), commissioned by the Novo-Ufa Petroleum Refinery, developed a burner with diffusion regulation of the flame. The gas nozzle of the burner is made up of two coaxial gas chambers 1 and 2, with independent feed of gas from a common line through two supply lines.

  13. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

Low-level light therapy (LLLT) has been applied clinically, and a growing number of cases report positive therapeutic effects from transcranial light-emitting diode (LED) illumination. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of each subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software provides on/off control of each LED array, the setup of illumination parameters, and the 3D distribution of LLLT light dose in the human subject according to the illumination setup. This dose distribution was computed by a Monte Carlo model for voxelized media with the Visible Chinese Human head dataset, and displayed in 3D against the head's anatomical structure. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed clear improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter-adjustable and illumination-distribution-visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.

  14. Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.

    ERIC Educational Resources Information Center

    Parker, Randall M.; Schaller, James; Hansmann, Sandra

    2003-01-01

    Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…

  15. Order Effects in Belief Updating: The Belief-Adjustment Model.

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Einhorn, Hillel J.

    1992-01-01

    A theory of the updating of beliefs over time is presented that explicitly accounts for order-effect phenomena as arising from the interaction of information-processing strategies and task characteristics. The belief-adjustment model is supported by 5 experiments involving 192 adult subjects. (SLD)
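The sequential form of the belief-adjustment model can be sketched in a few lines. The evidence coding in [-1, 1], the starting belief, and the evidence strengths below are illustrative assumptions; the anchoring-and-adjustment structure (weight proportional to the current anchor) follows the model's published form.

```python
# Minimal sketch of step-by-step belief adjustment: S_k = S_{k-1} + w_k * s_k,
# with w_k = S_{k-1} for disconfirming evidence (s_k <= 0) and
# w_k = 1 - S_{k-1} for confirming evidence (s_k > 0). Beliefs stay in [0, 1].

def update(belief, evidence):
    w = belief if evidence <= 0 else 1.0 - belief
    return belief + w * evidence

def final_belief(sequence, start=0.5):
    b = start
    for s in sequence:
        b = update(b, s)
    return b

pos_then_neg = final_belief([+0.6, -0.6])   # same evidence, different order
neg_then_pos = final_belief([-0.6, +0.6])
```

The two orders end at different beliefs, reproducing the recency-type order effect the model was built to explain.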

  16. Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters

    NASA Astrophysics Data System (ADS)

    Yang, Ding-Xin; Gu, Feng-Shou; Feng, Guo-Jin; Yang, Yong-Min; Ball, Andrew

    2015-11-01

The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system is demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to logic input signals of different bit rates, showing that arbitrarily high bit rate LSR in a bistable system cannot be achieved. Then, a normalized transform of the LSR bistable system is introduced through a kind of variable substitution. Based on the transform, it is found that LSR for arbitrarily high bit rate logic signals in a bistable system can be achieved by adjusting the parameters of the system, setting the bias value, and amplifying the amplitudes of the logic input signals and noise properly. Finally, the desired OR and AND logic outputs to high bit rate logic inputs in a bistable system are obtained by numerical simulations. The study may make LSR more feasible for practical engineering applications. Project supported by the National Natural Science Foundation of China (Grant No. 51379526).
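The basic LSR mechanism can be sketched with the standard overdamped bistable system; the bias value, input mapping, noise level, and step sizes below are illustrative assumptions, not the paper's tuned parameters.

```python
import random

# Sketch of logical stochastic resonance in the overdamped bistable system
#   dx/dt = x - x**3 + bias + I1 + I2 + noise.
# Logic bits {0, 1} are mapped to inputs {-0.5, +0.5}; the output is 1 if
# the state settles in the positive well. With bias = +0.3 the asymmetric
# double well encodes an OR gate (bias = -0.3 would give AND).

def lsr_gate(bit1, bit2, bias=0.3, sigma=0.05, dt=0.01, steps=4000, seed=1):
    rng = random.Random(seed)
    drive = (bit1 - 0.5) + (bit2 - 0.5) + bias
    x = 0.0
    for _ in range(steps):
        x += (x - x**3 + drive) * dt + sigma * dt**0.5 * rng.gauss(0.0, 1.0)
    return 1 if x > 0 else 0

or_table = {(a, b): lsr_gate(a, b) for a in (0, 1) for b in (0, 1)}
```

In the paper's normalized-transform argument, rescaling the system parameters effectively rescales time, which is what allows the same well structure to track faster logic inputs.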

  17. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.
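A "simple implementation" of CPT in the sense used here, with one value-function exponent for gains and losses, can be sketched as follows; the functional forms are the standard Tversky-Kahneman ones, and the parameter values (alpha = 0.88, lambda = 2.25, gamma = 0.61) are commonly cited medians, used purely for illustration.

```python
# Illustrative CPT sketch for simple two-outcome gambles.

def value(x, alpha=0.88, lam=2.25):
    """Piecewise power value function; lam > 1 gives loss aversion."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(outcome, p):
    """CPT value of the gamble: win `outcome` with probability p, else 0."""
    return weight(p) * value(outcome)

risky = cpt_value(100.0, 0.5)   # 50% chance of 100
sure = value(50.0)              # its expected value, for certain
```

Fitting a model like this per participant means adjusting alpha, lambda, and gamma to each individual's choices, which is exactly the kind of adjustable-parameter estimation whose temporal stability the paper examines.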

  18. Adjustment in Mothers of Children with Asperger Syndrome: An Application of the Double ABCX Model of Family Adjustment

    ERIC Educational Resources Information Center

    Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate

    2005-01-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…

  19. Age of dam and sex of calf adjustments and genetic parameters for gestation length in Charolais cattle.

    PubMed

    Crews, D H

    2006-01-01

To estimate adjustment factors and genetic parameters for gestation length (GES), AI and calving date records (n = 40,356) were extracted from the Canadian Charolais Association field database. The average time from AI to calving date was 285.2 d (SD = 4.49 d) and ranged from 274 to 296 d. Fixed effects were sex of calf, age of dam (2, 3, 4, 5 to 10, > or = 11 yr), and gestation contemporary group (year of birth × herd of origin). Variance components were estimated using REML and 4 animal models (n = 84,332) containing from 0 to 3 random maternal effects. Model 1 (M1) contained only direct genetic effects. Model 2 (M2) was M1 plus maternal genetic effects with the direct × maternal genetic covariance constrained to zero, and model 3 (M3) was M2 without the covariance constraint. Model 4 (M4) extended M3 to include a random maternal permanent environmental effect. Direct heritability estimates were high and similar among all models (0.61 to 0.64), and maternal heritability estimates were low, ranging from 0.01 (M2) to 0.09 (M3). Likelihood ratio tests and parameter estimates suggested that M4 was the most appropriate (P < 0.05) model. With M4, phenotypic variance (18.35 d²) was partitioned into direct and maternal genetic, and maternal permanent environmental components (h²d = 0.64 +/- 0.04, h²m = 0.07 +/- 0.01, r(d,m) = -0.37 +/- 0.06, and c² = 0.03 +/- 0.01, respectively). Linear contrasts were used to estimate that bull calves gestated 1.26 d longer (P < 0.02) than heifers, and adjustments to a mature-equivalent (5 to 10 yr old) age of dam were 1.49 (P < 0.01), 0.56 (P < 0.01), 0.33 (P < 0.01), and -0.24 (P < 0.14) d for GES records of calves born to 2-, 3-, 4-, and > or = 11-yr-old cows, respectively. Bivariate animal models were used to estimate genetic parameters for GES with birth and adjusted 205-d weaning weights, and postweaning gain.
Direct GES was positively correlated with direct birth weight (BWT; 0.34 +/- 0.04) but negatively correlated with maternal

  20. Use of generalised Procrustes analysis for the photogrammetric block adjustment by independent models

    NASA Astrophysics Data System (ADS)

    Crosilla, Fabio; Beinat, Alberto

The paper first reviews some aspects of generalised Procrustes analysis (GP) and outlines the analogies with the block adjustment by independent models. On this basis, an innovative solution of the block adjustment problem by Procrustes algorithms and the related computer program implementation are presented and discussed. The main advantage of the new proposed method is that it avoids the conventional least squares solution. For this reason, linearisation algorithms and the knowledge of a priori approximate values for the unknown parameters are not required. Once the model coordinates of the tie points are available and at least three control points are known, the Procrustes algorithms can directly provide, without further information, the tie point ground coordinates and the exterior orientation parameters. Furthermore, some numerical block adjustment solutions obtained by the new method in different areas of North Italy are compared to the conventional solution. The very simple data input process, the lower memory requirements, the low computing time and the same level of accuracy that characterise the new algorithm with respect to a conventional one are verified with these tests. A block adjustment of 11 models, with 44 tie points and 14 control points, takes just a few seconds on an Intel PIII 400 MHz computer, and the total data memory required is less than twice the allocated space for the input data. This is because most of the computations are carried out on data matrices of limited size, typically 3×3.
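The core Procrustes step is finding the similarity transform (scale, rotation, translation) that maps model coordinates onto control points without linearisation or approximate starting values. The 2-D closed form below is a simplified stand-in for the 3-D, SVD-based solution used in photogrammetric practice; the point coordinates are illustrative.

```python
import math

def procrustes_2d(model_pts, control_pts):
    """Closed-form 2-D similarity transform (scale, rotation, translation)
    mapping model_pts onto control_pts in a least-squares sense."""
    n = len(model_pts)
    mx = sum(p[0] for p in model_pts) / n
    my = sum(p[1] for p in model_pts) / n
    cx = sum(p[0] for p in control_pts) / n
    cy = sum(p[1] for p in control_pts) / n
    # Cross-terms of the centred coordinates give rotation and scale.
    a = sum((p[0]-mx)*(q[0]-cx) + (p[1]-my)*(q[1]-cy)
            for p, q in zip(model_pts, control_pts))
    b = sum((p[0]-mx)*(q[1]-cy) - (p[1]-my)*(q[0]-cx)
            for p, q in zip(model_pts, control_pts))
    theta = math.atan2(b, a)
    norm = sum((p[0]-mx)**2 + (p[1]-my)**2 for p in model_pts)
    s = math.hypot(a, b) / norm
    tx = cx - s*(math.cos(theta)*mx - math.sin(theta)*my)
    ty = cy - s*(math.sin(theta)*mx + math.cos(theta)*my)
    def apply(p):
        return (s*(math.cos(theta)*p[0] - math.sin(theta)*p[1]) + tx,
                s*(math.sin(theta)*p[0] + math.cos(theta)*p[1]) + ty)
    return apply

# Recover a known transform from three "control points".
model_xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
th, sc, off = 0.5, 2.0, (10.0, -3.0)
control_xy = [(sc*(math.cos(th)*x - math.sin(th)*y) + off[0],
               sc*(math.sin(th)*x + math.cos(th)*y) + off[1])
              for x, y in model_xy]
fit = procrustes_2d(model_xy, control_xy)
```

Note that no iteration or linearisation is needed: the transform is recovered in one pass, which is the property the paper exploits.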

  1. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476
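The computational burden the paper targets is easy to see in the naive baseline: re-solving the PDE numerically for every candidate parameter value. The sketch below illustrates that baseline (not the authors' cascading or Bayesian methods) for a 1-D heat equation with an assumed true diffusivity and noise-free synthetic data.

```python
import math

# Naive baseline for PDE parameter estimation: grid search over the
# diffusivity D in u_t = D * u_xx, solving with an explicit finite-
# difference scheme for each candidate. All values are illustrative.

def solve_heat(D, nx=21, nt=50, dx=0.05, dt=0.001):
    """Explicit scheme on [0, 1] with zero boundaries (stable for D <= 1)."""
    u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial profile
    for _ in range(nt):
        u = ([0.0] +
             [u[i] + D * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
              for i in range(1, nx - 1)] + [0.0])
    return u

data = solve_heat(0.5)                    # synthetic "measurements", D_true = 0.5
candidates = [k / 10 for k in range(1, 11)]
best = min(candidates,
           key=lambda D: sum((a - b)**2 for a, b in zip(solve_heat(D), data)))
```

Each candidate costs a full forward solve; the paper's basis-expansion methods avoid exactly this repeated solving.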

  2. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    NASA Astrophysics Data System (ADS)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large-scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration, or on spatially constant parameters resulting from the adoption of default or a priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data, and then scales them to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration, where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
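The MPR idea (calibrate the transfer function, not the parameter field) can be sketched in miniature. The linear pedotransfer form, its coefficients, the sand fractions, and the choice of harmonic-mean upscaling below are all illustrative assumptions, not the study's actual functions.

```python
# MPR-style sketch: a transfer function maps a fine-resolution soil
# attribute to a hydraulic parameter; the fine-scale field is then
# upscaled to the model grid cell with a scaling function (here, the
# harmonic mean). Calibration would adjust a and b, never the field itself.

def transfer(sand, a=10.0, b=40.0):
    """Hypothetical pedotransfer function: conductivity from sand fraction."""
    return a + b * sand

def harmonic_mean(vals):
    return len(vals) / sum(1.0 / v for v in vals)

fine_sand = [0.2, 0.4, 0.6, 0.8]              # fine-resolution attribute cells
fine_param = [transfer(s) for s in fine_sand]  # parameter at native resolution
grid_param = harmonic_mean(fine_param)         # value seen by one model grid cell
```

Because only a and b are calibrated, the same two numbers generate a spatially continuous parameter field everywhere the attribute data exist, which is what removes the patchwork patterns.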

  3. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion.

    PubMed

    Huang, Lam O; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

Transmission of the two parental alleles to offspring deviating from the Mendelian ratio is termed Transmission Ratio Distortion (TRD), which occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed consistent results with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies. PMID:27630667
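The direction of the bias is visible with deterministic toy numbers (illustrative values, not the paper's loglinear machinery): under no true association, an allele transmitted with probability tau > 0.5 still produces an apparent relative risk above 1 unless the Mendelian 1:1 assumption is replaced by the true transmission odds.

```python
# Toy arithmetic showing why ignoring TRD inflates the relative risk.
# tau is the true transmission probability under TRD; values are assumed.

tau = 0.55                       # transmission probability under TRD, no association
n = 10000                        # informative parental transmissions
transmitted = tau * n
untransmitted = (1 - tau) * n

naive_rr = transmitted / untransmitted        # assumes a Mendelian 1:1 ratio
adjusted_rr = naive_rr / (tau / (1 - tau))    # rescale by the TRD odds
```

The naive estimate is spuriously above 1; dividing out the TRD odds restores the null value of 1, which is the intuition behind adjusting the loglinear model for TRD.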

  4. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    PubMed Central

    Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

Transmission of the two parental alleles to offspring deviating from the Mendelian ratio is termed Transmission Ratio Distortion (TRD), which occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed consistent results with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies.

  6. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  7. Model parameter updating using Bayesian networks

    SciTech Connect

    Treml, C. A.; Ross, Timothy J.

    2004-01-01

This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to those of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty; only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time-domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled-cavity linac. The results are compared to experimental data.
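A heavily simplified stand-in for this updating loop replaces the BN's conditional-probability tables with a discrete belief over one bounded parameter, updated after each black-box run against the reference feature. The quadratic "model", the Gaussian-shaped likelihood, and all numeric values are assumptions for illustration only.

```python
import math

# Simplified sketch of bounded-parameter updating against a reference
# feature: maintain a discrete belief over the parameter (bounds only,
# no distributional assumptions) and reweight it after each model run.

def simulate(p):
    """Black-box model returning one output feature of interest."""
    return p * p + 1.0

reference = 5.0                               # feature from experimental data
grid = [1.0 + 0.25 * k for k in range(9)]     # parameter bounds [1, 3]
belief = [1.0 / len(grid)] * len(grid)        # flat prior over the bounds

for _ in range(3):                            # repeated model executions
    like = [math.exp(-0.5 * ((simulate(p) - reference) / 0.5) ** 2)
            for p in grid]
    belief = [b * l for b, l in zip(belief, like)]
    z = sum(belief)
    belief = [b / z for b in belief]

inferred = grid[belief.index(max(belief))]    # inferred model parameter
```

After a few iterations the belief concentrates on the parameter value whose output matches the reference, mirroring how each run refines the BN's inferred inputs.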

  8. Global Model Analysis by Parameter Space Partitioning

    ERIC Educational Resources Information Center

    Pitt, Mark A.; Kim, Woojae; Navarro, Daniel J.; Myung, Jay I.

    2006-01-01

    To model behavior, scientists need to know how models behave. This means learning what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because of the complexity of psychological models (e.g., their many parameters) and because the behavioral precision of models (e.g.,…

  9. Adjustable box-wing model for solar radiation pressure impacting GPS satellites

    NASA Astrophysics Data System (ADS)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.

    2012-04-01

One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate for the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite's surfaces. In addition, the so-called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data the estimated parameters and orbits are analyzed.
The performance of the models is comparable, when looking at orbit overlap and orbit
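The building block of any box-wing model is the radiation-pressure acceleration of a single flat surface, split into absorbed, specularly reflected, and diffusely reflected fractions; a box-wing model sums such terms over the bus faces and solar panels, and adjusting the optical coefficients is what fits the model to tracking data. The sketch below uses the standard flat-plate formula with illustrative coefficients, not the paper's estimated GPS values.

```python
# Flat-plate solar radiation pressure sketch: acceleration components
# along the sun direction s and the surface normal n, for optical
# fractions rho (specular), delta (diffuse), alpha = 1 - rho - delta
# (absorbed). Coefficients and plate properties are illustrative.

SOLAR_FLUX = 1367.0       # W/m^2 at 1 AU
C = 299792458.0           # speed of light, m/s

def plate_accel(area, mass, cos_theta, rho, delta):
    """Acceleration magnitudes (m/s^2) along s-hat and n-hat."""
    alpha = 1.0 - rho - delta
    p = SOLAR_FLUX * area * cos_theta / (mass * C)
    along_s = p * (alpha + delta)
    along_n = p * 2.0 * (rho * cos_theta + delta / 3.0)
    return along_s, along_n
```

For a fully absorbing plate at normal incidence this reduces to the textbook value Phi·A/(m·c); making rho and delta adjustable per surface is the "adjustable" part of the box-wing approach.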

  10. Adjusting exposure limits for long and short exposure periods using a physiological pharmacokinetic model

    SciTech Connect

    Andersen, M.E.; MacNaughton, M.G.; Clewell, H.J. III; Paustenbach, D.J.

    1987-04-01

This paper advocates use of a physiologically based pharmacokinetic (PB-PK) model for determining adjustment factors for unusual exposure schedules. The PB-PK model requires data on the blood:air and tissue:blood partition coefficients, the rate of metabolism of the chemical, organ volumes, organ blood flows, and ventilation rates in humans. Laboratory data on two industrially important chemicals - styrene and methylene chloride - were used to illustrate the PB-PK approach. At inhaled concentrations near their respective 8-hr Threshold Limit Value-Time-Weighted Averages (TLV-TWAs), both of these chemicals are primarily eliminated from the body by metabolism. For these two chemicals, the appropriate risk-indexing parameters are integrated tissue dose or total amount of parent chemical metabolized. These examples also illustrate how the model can be used to calculate risk based on various other measures of delivered dose. For the majority of volatile chemicals, the parameter most closely associated with risk is the integrated tissue dose. This analysis suggests that when pharmacokinetic data are not available, a simple inverse formula may be sufficient for adjustment in most instances, and application of complex kinetic models unnecessary. At present, this PB-PK approach is recommended only for exposure periods of 4 to 16 hr/day, because the mechanisms of toxicity for some chemicals may vary for very short- or very long-term exposures. For these altered schedules, more biological information on recovery in rest periods and changing mechanisms of toxicity is necessary before any adjustment is attempted.
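The "simple inverse formula" mentioned as a fallback when pharmacokinetic data are unavailable can be sketched as follows: scale the 8-hour limit by 8/h so the total daily intake stays constant for an h-hour shift. This is a minimal sketch of the inverse rule only, not the PB-PK model, and the 4-16 h applicability bound is taken from the abstract.

```python
# Inverse-rule adjustment of an 8-hr exposure limit for an h-hour workday,
# holding the daily dose constant. Example concentration is illustrative.

def adjusted_limit(tlv_8hr, hours_per_day):
    if not 4 <= hours_per_day <= 16:
        raise ValueError("inverse rule suggested only for 4-16 hr/day schedules")
    return tlv_8hr * 8.0 / hours_per_day

limit_12hr = adjusted_limit(100.0, 12)   # e.g. a 100 ppm TLV for a 12-hr shift
```

A longer shift gets a proportionally lower limit; an 8-hour shift is unchanged. The paper's point is that for most metabolically eliminated volatiles this simple rule approximates the PB-PK result.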

  11. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…
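The three-parameter logistic item response function at issue can be written down directly; the parameter values below are illustrative, and the identifiability problem the paper analyzes concerns trade-offs among these parameters when all discriminations are equal.

```python
import math

# Three-parameter logistic (3PL) item response function:
# a = discrimination, b = difficulty, c = lower asymptote (guessing).

def p_correct(theta, a, b, c):
    """Probability of a correct response for ability theta under the 3PL."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

p_at_difficulty = p_correct(0.0, a=1.0, b=0.0, c=0.2)   # midpoint of the curve
```

At theta = b the probability is c + (1 - c)/2, and for very low ability it approaches the guessing floor c; these structural constraints are what make some parameter combinations trade off against one another.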

  12. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10⁵-10⁶), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters; (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters; (3) using an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; (4) using a weighting function to transfer multi-objective optimization to single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve optimal parameters efficiently and effectively. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
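Step (4), collapsing several calibration objectives into one scalar, can be sketched as a weighted sum of objectives normalized by a reference (default-parameter) run; the weights and error values below are assumptions, not the study's configuration.

```python
# Sketch of multi-objective-to-single-objective weighting: each objective
# (e.g. errors in water, energy, and carbon outputs) is divided by its
# value for a reference run, then combined with fixed weights. A score
# below 1 means improvement over the reference parameters.

def single_objective(errors, reference_errors, weights):
    """Weighted sum of objectives normalized by a reference run."""
    return sum(w * e / r
               for w, e, r in zip(weights, errors, reference_errors))

score = single_objective(errors=[0.8, 1.5, 0.02],
                         reference_errors=[1.0, 2.0, 0.04],
                         weights=[0.4, 0.4, 0.2])
```

The surrogate model then only needs to emulate this one scalar as a function of the screened parameters, which is what makes the optimization affordable.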

  13. Measurement of the angular-motion parameters of a base by a dynamically adjustable gyroscope

    NASA Astrophysics Data System (ADS)

    Zbrutskii, A. V.

    1986-04-01

    The paper examines the dynamics and errors of a balanced dynamically adjustable gyroscope as a sensor of the angular deviations and angular velocities of the base. Attention is given to measurements made under conditions of uniform and uniformly accelerated rotation of the base.

  14. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

This paper is a contribution to lithium-ion battery modelling that takes aging effects into account. It first analyses the impact of aging on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs, the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack consisting of four cells.
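The single-adjustment-parameter idea can be sketched under simplifying assumptions: treat the cell OCV as a fixed polynomial of an effective stoichiometry z, let aging only rescale z = k·soc through one parameter k, and recover k from a measured (soc, OCV) pair by bisection. The polynomial coefficients, the aging model, and the values below are illustrative, not the paper's fitted curves.

```python
# One-parameter aging adjustment sketch for an OCV curve.

def ocv(z):
    """Illustrative monotone OCV polynomial (volts) for z in [0, 1]."""
    return 3.0 + 0.9 * z + 0.2 * z * z

def fit_aging_parameter(soc, measured_ocv, lo=0.5, hi=1.0, iters=60):
    """Bisect on the aging parameter k so that ocv(k * soc) matches
    the measured open-circuit voltage (ocv is monotone in z)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ocv(mid * soc) < measured_ocv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_true = 0.9                                   # simulated aged cell
k_est = fit_aging_parameter(0.8, ocv(k_true * 0.8))
```

Because the polynomial itself never changes, tracking aging reduces to re-estimating the single scalar k, which is the practical appeal of the paper's formulation.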

  15. Mispricing in the medicare advantage risk adjustment model.

    PubMed

    Chen, Jing; Ellis, Randall P; Toro, Katherine H; Ash, Arlene S

    2015-01-01

    The Centers for Medicare and Medicaid Services (CMS) implemented hierarchical condition category (HCC) models in 2004 to adjust payments to Medicare Advantage (MA) plans to reflect enrollees' expected health care costs. We use Verisk Health's diagnostic cost group (DxCG) Medicare models, refined "descendants" of the same HCC framework with 189 comprehensive clinical categories available to CMS in 2004, to reveal 2 mispricing errors resulting from CMS' implementation. One comes from ignoring all diagnostic information for "new enrollees" (those with less than 12 months of prior claims). Another comes from continuing to use the simplified models that were originally adopted in response to assertions from some capitated health plans that submitting the claims-like data that facilitate richer models was too burdensome. Even the main CMS model being used in 2014 recognizes only 79 condition categories, excluding many diagnoses and merging conditions with somewhat heterogeneous costs. Omitted conditions are typically lower cost or "vague" and not easily audited from simplified data submissions. In contrast, DxCG Medicare models use a comprehensive, 394-HCC classification system. Applying both models to Medicare's 2010-2011 fee-for-service 5% sample, we find mispricing and lower predictive accuracy for the CMS implementation. For example, in 2010, 13% of beneficiaries had at least 1 higher cost DxCG-recognized condition but no CMS-recognized condition; their 2011 actual costs averaged US$6628, almost one-third more than the CMS model prediction. As MA plans must now supply encounter data, CMS should consider using more refined and comprehensive (DxCG-like) models.

  16. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
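The IUWLS idea can be sketched on a synthetic linear problem (all numbers invented, not from the paper): weight each observation by the inverse of its total variance, observation noise plus pumping-induced variance, and update that weight iteratively:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear "groundwater model": heads = X @ params + noise, where
# part of the noise comes from an uncertain pumping input.
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])
true_params = np.array([2.0, 0.5])
pumping_sigma = rng.uniform(0.1, 2.0, n)  # known input-uncertainty level
y = X @ true_params + rng.normal(0.0, 0.2, n) + rng.normal(0.0, pumping_sigma)

def wls(X, y, w):
    # weighted least squares with a diagonal weight matrix
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

beta = wls(X, y, np.ones(n))  # OLS start: ignores input uncertainty
sigma_obs2 = 1.0              # initial guess of observation-error variance
for _ in range(5):
    # IUWLS-style weight: inverse of (obs variance + pumping variance),
    # re-estimating the obs variance from the residuals each iteration
    beta = wls(X, y, 1.0 / (sigma_obs2 + pumping_sigma ** 2))
    resid = y - X @ beta
    sigma_obs2 = max(float(np.mean(resid ** 2 - pumping_sigma ** 2)), 1e-6)
```

With the down-weighting in place, observations driven mostly by uncertain pumping contribute less to the fit, which is what removes the bias that plain OLS suffers from.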

  18. Dose adjustment strategy of cyclosporine A in renal transplant patients: evaluation of anthropometric parameters for dose adjustment and C0 vs. C2 monitoring in Japan, 2001-2010.

    PubMed

    Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki

    2013-01-01

    The optimal use and monitoring of cyclosporine A (CyA) have remained unclear, and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; the secondary aim was to compare the usefulness of monitoring the concentration at predose (C0) and at 2 hours postdose (C2). An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat-free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficients was C0 > C2 (C0: r=0.6273; C2: r=0.5562). The linear regression analyses between AUC0-9 of CyA and the candidate parameters indicated their potential usefulness in the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > age. In conclusion, C2 monitoring after oral administration showed large variation and could carry a high risk of overdosing; it was therefore not considered useful as a single monitoring approach, but should rather be used together with C0 monitoring. The regression analyses between AUC0-9 of CyA and the anthropometric parameters indicated that IBW was potentially a superior predictor for dose adjustment of CyA compared with the empiric strategy using TBW (IBW: r=0.5181; TBW: r=0.3192); however, this finding seems to lack a pharmacokinetic rationale and thus warrants further basic and clinical
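The predictor screening described above amounts to ranking candidates by their correlation with AUC. The sketch below uses synthetic stand-ins for the patient data (the variables `ibw`, `tbw`, `auc` and their distributions are invented; the real cohort is not public):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins for two candidate predictors and the exposure metric
n = 81
ibw = rng.normal(60.0, 8.0, n)
tbw = ibw + rng.normal(0.0, 12.0, n)          # TBW = IBW plus extra mass
auc = 50.0 * ibw + rng.normal(0.0, 900.0, n)  # AUC tracks IBW more closely

def pearson_r(x, y):
    # correlation coefficient, as used in the paper's linear regressions
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Rank candidate predictors by |r|
candidates = {"IBW": ibw, "TBW": tbw}
ranking = sorted(candidates, key=lambda k: -abs(pearson_r(auc, candidates[k])))
```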

  19. Towards accurate observation and modelling of Antarctic glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    King, M.

    2012-04-01

    The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention over the past decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region, the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. In recent years, however, there has been a step change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will focus in particular on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, highlighting areas where further critical developments are required.

  20. Models and parameters for environmental radiological assessments

    SciTech Connect

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  1. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    ERIC Educational Resources Information Center

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  2. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  3. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    NASA Astrophysics Data System (ADS)

    Barsai, Gabor

    provides a path to fuse data from lidar, GIS and digital multispectral images and to reconstruct the precise 3-D scene model without human intervention, regardless of the type of data or features in the data. The data are initially registered to each other using GPS/INS initial positional values; conjugate features are then found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds edges of buildings with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge-extracted image/map. These edge points are then used to orient and locate the datasets in the correct position with respect to each other.

  4. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
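The Green's Functions estimation step amounts to a small linear least-squares problem: find the combination of sensitivity experiments that best explains the model-data differences. The sketch below uses random synthetic sensitivities, not the ECCO2-Darwin runs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each column of G holds the response of the modeled data-equivalents to
# one perturbed sensitivity experiment, relative to the baseline run.
# Six control parameters, as in the paper; all numbers are synthetic.
n_obs, n_ctrl = 50, 6
G = rng.normal(size=(n_obs, n_ctrl))
eta_true = np.array([0.3, -0.1, 0.5, 0.0, 0.2, -0.4])
baseline = rng.normal(size=n_obs)
obs = baseline + G @ eta_true + rng.normal(0.0, 0.05, n_obs)

# Least-squares weights for the linear combination of experiments that
# minimizes the model-data misfit
eta, *_ = np.linalg.lstsq(G, obs - baseline, rcond=None)

misfit_before = float(np.sum((obs - baseline) ** 2))
misfit_after = float(np.sum((obs - baseline - G @ eta) ** 2))
```

The estimated weights `eta` then define the adjusted initial conditions and gas-exchange coefficients used for the forward integration.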

  5. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025

  6. Adjusting the Census of 1990: The Smoothing Model.

    ERIC Educational Resources Information Center

    Freedman, David A.; And Others

    1993-01-01

    Techniques for adjusting census figures are discussed, with a focus on sampling error, uncertainty of estimates resulting from the luck of sample choice. Computer simulations illustrate the ways in which the smoothing algorithm may make adjustments less, rather than more, accurate. (SLD)

  7. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  8. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  9. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard

    PubMed Central

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase “RHINE WAAL UNIVERSITY” with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy). PMID:26733788

  10. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    PubMed

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  11. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    NASA Astrophysics Data System (ADS)

    Casaioli, M.; Mariani, S.; Accadia, C.; Gabella, M.; Michaelides, S.; Speranza, A.; Tartaglione, N.

    2006-01-01

    In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events that occurred over the island of Cyprus were performed by means of numerical atmospheric models. One of the aims of the project was the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation forecast by the BOlogna Limited Area Model (BOLAM) was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors that depend on the distance from the radar, these data can be range-adjusted using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, two observational fields were employed in this work: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. The skill score results show some differences when the two observational fields are used. The CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to both observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern better than those of the non-shifted forecast. However, some questions, especially regarding the effect of other range-adjustment techniques, remain open and need to be addressed in future work.
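The CRA-style shift search can be sketched as a brute-force displacement scan. The fields below are illustrative, not the BOLAM/radar data, and the periodic wrap-around of `np.roll` is a simplification of the real boundary handling:

```python
import numpy as np

def best_shift(forecast, analysis, max_shift=5):
    # Slide the forecast over the analysis grid and keep the displacement
    # that minimizes the mean squared difference between the fields.
    best = (0, 0)
    best_mse = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(forecast, dy, axis=0), dx, axis=1)
            mse = float(np.mean((shifted - analysis) ** 2))
            if mse < best_mse:
                best_mse, best = mse, (dy, dx)
    return best

# Synthetic check: the analysis is the forecast displaced 3 columns east
rng = np.random.default_rng(3)
forecast = rng.random((40, 40))
analysis = np.roll(forecast, 3, axis=1)
shift = best_shift(forecast, analysis)  # recovers (0, 3)
```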

  12. Analysis of Modeling Parameters on Threaded Screws.

    SciTech Connect

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to modeling a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.

  13. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    PubMed

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of the variance model solutions after each multiplicative model cycle enabled fast convergence of the adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved the heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking.

  14. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    PubMed

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  16. Dolphins Adjust Species-Specific Frequency Parameters to Compensate for Increasing Background Noise

    PubMed Central

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles’ frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise. PMID:25853825

  17. Comparison of Inorganic Carbon System Parameters Measured in the Atlantic Ocean from 1990 to 1998 and Recommended Adjustments

    SciTech Connect

    Wanninkhof, R.

    2003-05-21

    As part of the global synthesis effort sponsored by the Global Carbon Cycle project of the National Oceanic and Atmospheric Administration (NOAA) and U.S. Department of Energy, a comprehensive comparison was performed of inorganic carbon parameters measured on oceanographic surveys carried out under the auspices of the Joint Global Ocean Flux Study and related programs. Many of the cruises were performed as part of the World Hydrographic Program of the World Ocean Circulation Experiment and the NOAA Ocean-Atmosphere Carbon Exchange Study. Total dissolved inorganic carbon (DIC), total alkalinity (TAlk), fugacity of CO2, and pH data from twenty-three cruises were checked to determine whether there were systematic offsets of these parameters between cruises. The focus was on the DIC and TAlk state variables. Data quality and offsets of DIC and TAlk were determined by using several different techniques. One approach was based on crossover analyses, in which the deep-water concentrations of DIC and TAlk were compared for stations on different cruises that were within 100 km of each other. Regional comparisons were also made by using a multiple-parameter linear regression technique in which DIC or TAlk was regressed against hydrographic and nutrient parameters. When offsets of greater than 4 µmol/kg were observed for DIC and/or 6 µmol/kg for TAlk, the data taken on the cruise were closely scrutinized to determine whether the offsets were systematic. Based on these analyses, the DIC and TAlk data of three cruises were deemed of insufficient quality to be included in the comprehensive basinwide data set. For several of the cruises, small adjustments in TAlk were recommended for consistency with other cruises in the region.
After these adjustments were incorporated, the inorganic carbon data from all cruises along with hydrographic, chlorofluorocarbon, and nutrient data were combined as a research quality product for the scientific community.
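    The crossover check described above — comparing deep-water DIC at station pairs from different cruises that lie within 100 km of each other, and flagging mean offsets above the 4 µmol/kg DIC threshold — can be sketched as follows. The station coordinates and DIC values are synthetic, illustrative numbers, not data from the report:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in km
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))

def crossover_offsets(cruise_a, cruise_b, max_km=100.0):
    # Each station: (lat, lon, deep_dic) -- deep-water mean DIC in umol/kg
    offsets = []
    for la, lo_a, dic_a in cruise_a:
        for lb, lo_b, dic_b in cruise_b:
            if haversine_km(la, lo_a, lb, lo_b) <= max_km:
                offsets.append(dic_a - dic_b)
    return offsets

# Synthetic (illustrative) deep-water DIC values, umol/kg:
a = [(30.0, -40.0, 2185.0), (35.0, -40.0, 2190.0)]
b = [(30.3, -40.2, 2180.0), (50.0, -10.0, 2200.0)]
offs = crossover_offsets(a, b)
mean_off = sum(offs) / len(offs)
flagged = abs(mean_off) > 4.0  # DIC threshold used in the study
```

    With these made-up stations only one pair falls within 100 km, giving a 5 µmol/kg offset, which would be flagged for scrutiny.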

  18. Bioelectrical impedance modelling of gentamicin pharmacokinetic parameters.

    PubMed

    Zarowitz, B J; Pilla, A M; Peterson, E L

    1989-10-01

    1. Bioelectrical impedance analysis was used to develop descriptive models of gentamicin pharmacokinetic parameters in 30 adult in-patients receiving therapy with gentamicin. 2. Serial blood samples obtained from each subject at steady state were analyzed and used to derive gentamicin pharmacokinetic parameters. 3. Multiple regression equations were developed for clearance, elimination rate constant and volume of distribution at steady state and were all statistically significant at P less than 0.05. 4. Clinical validation of this innovative technique is warranted before clinical use is recommended.
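    A multiple-regression model of the kind described — a pharmacokinetic parameter regressed on impedance-derived covariates — can be sketched with ordinary least squares. The predictors, coefficients, and data below are synthetic stand-ins, not the study's equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # number of patients, matching the study's sample size
# Hypothetical predictors: an impedance-derived index and body weight
resistance_index = rng.uniform(0.3, 0.7, n)
weight_kg = rng.uniform(50, 100, n)
# Synthetic "true" relationship plus noise (illustrative only)
clearance = 2.0 + 4.0 * resistance_index + 0.05 * weight_kg + rng.normal(0, 0.2, n)

# Ordinary least squares: design matrix with intercept column
X = np.column_stack([np.ones(n), resistance_index, weight_kg])
beta, *_ = np.linalg.lstsq(X, clearance, rcond=None)
pred = X @ beta
# Coefficient of determination for the fitted model
r2 = 1 - np.sum((clearance - pred)**2) / np.sum((clearance - clearance.mean())**2)
```

    A significance test (P < 0.05, as in the abstract) would then be run on each coefficient; that step is omitted here.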

  19. EMG/ECG Acquisition System with Online Adjustable Parameters Using ZigBee Wireless Technology

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroyuki

    This paper deals with a novel wireless bio-signal acquisition system employing ZigBee wireless technology, which consists mainly of two components: an intelligent electrode and a data acquisition host. The former is the main topic of this paper. It is placed on a subject's body to amplify bio-signals such as EMG or ECG and stream their data at up to 2 ksps. One of the most remarkable features of the intelligent electrode is that it can change its own parameters, both digital and analog, on-line. The author first describes its design, then introduces a small, light and low-cost implementation of the intelligent electrode named “VAMPIRE-BAT,” and finally shows experimental results that confirm its usability and estimate its practical performance.

  20. Testing Linear Models for Ability Parameters in Item Response Models

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework. They are explicitly formulated…

  1. Genetic Parameters of Pre-adjusted Body Weight Growth and Ultrasound Measures of Body Tissue Development in Three Seedstock Pig Breed Populations in Korea

    PubMed Central

    Choy, Yun Ho; Mahboob, Alam; Cho, Chung Il; Choi, Jae Gwan; Choi, Im Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho

    2015-01-01

    The objective of this study was to compare the effects of body weight growth adjustment methods on genetic parameters of body growth and tissue among three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude (A)-mode ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90) were obtained by adjusting age based on body weight on the test day. Ultrasound measures were also pre-adjusted (ABF, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and ultrasound traits, whereas models II and III accounted for DAYS90 and pre-adjusted ultrasound traits. Fixed factors were sex and contemporary group (herd-year-month of birth) for all traits among the models. Additionally, models I and II considered a linear covariate of final weight on the ultrasound measure traits. Heritability (h2) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h2 estimates of DAYS90 from models II and III were also somewhat similar. The h2 for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Our heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (−0.29 to −0.38), and between DAYS90 and EMA (−0.16 to −0.26). BF had a strong rG with RCP (−0.87 to −0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds. For DAYS90, models II and III, its correlations with ABF, AEMA, and ARCP were mostly low or negligible except the r

  2. Modelling affect in terms of speech parameters.

    PubMed

    Stassen, H H

    1988-01-01

    It is well known that the human voice contains important information about the affective state of a speaker at a nonverbal level. Accordingly, we started an extensive investigation which aims at modelling intraindividual changes of the global affective state over time, as this state is reflected by the human voice, and can be inferred from measurable speech parameters. For the purpose of this investigation, a speech-recording procedure was designed which is especially suited to reveal intraindividual changes of voice patterns over time since each person serves as his or her own reference. On the other hand, the chosen experimental setup is less suited to classify patients in the sense of a traditional diagnostic scheme. In order to find an appropriate mathematical model on the basis of speech parameters, a calibration study with 190 healthy subjects was carried out which enabled us to investigate each parameter for its reproducibility, sensitivity and specificity. In particular, this calibration study yielded the information of how to draw the line between 'normal' fluctuations and 'significant' intraindividual changes over time. All speech parameters under discussion turned out to be sufficiently stable over time, whereas, in regard to their sensitivity to form and content of text, significant differences showed up. In a second step, a pilot study with 6 depressive patients was carried out in order to investigate the specificity of voice parameters with regard to psychopathology. It turned out that the registration procedure is realizable even if patients are considerably handicapped by their illness. However, no consistent correlations could be revealed between single speech parameters and psychopathological rating scales.(ABSTRACT TRUNCATED AT 250 WORDS)

  3. Hematologic parameters in the adjustment of chemotherapy doses in combined modality treatments involving radiation

    SciTech Connect

    Byfield, J.E.

    1984-08-01

    The differential white blood cell count of a group of patients with Stages I and II infiltrating ductal carcinoma who underwent treatment in the preadjuvant chemotherapy era has been evaluated. All patients received a modified radical mastectomy followed by postoperative radiation therapy to the chest wall and draining regional lymph node chains (ipsilateral internal mammary, axillary, and supraclavicular regions). When the levels of circulating neutrophils, band cells, and lymphocytes were compared for the period beginning prior to surgery and ending 1 year after the completion of radiotherapy, it was found that radiation induced a significant lymphopenia. However, all patients maintained a neutrophil count at least twice that needed for full-dose conventional chemotherapy. Based on these observations and related preclinical and clinical information, it is proposed that future clinical trials utilizing even local radiotherapy as a component of therapy must have their chemotherapy doses based on appropriate hematologic parameters (neutrophil + band count) in order to avoid spurious and quite possibly erroneous results.

  4. Modelling spin Hamiltonian parameters of molecular nanomagnets.

    PubMed

    Gupta, Tulika; Rajaraman, Gopalan

    2016-07-12

    Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J covers various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new-generation MNMs. PMID:27366794

  5. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  6. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    NASA Astrophysics Data System (ADS)

    Barsai, Gabor

    Creating accurate, current digital maps and 3-D scenes is a high priority in today's fast-changing environment. The nation's maps are in a constant state of revision, with many alterations or new additions each day. Digital maps have become quite common; Google Maps, MapQuest and others are examples, and these also have 3-D viewing capability. Many details are now included, such as the height of low bridges, in the attribute data for the objects displayed on digital maps and scenes. To expedite the updating of these datasets, they should be created autonomously, without human intervention, from data streams. Though systems exist that attain fast, or even real-time, mapping and reconstruction performance, they are typically restricted to creating sketches from the data stream, and not accurate maps or scenes. The ever-increasing amount of image data available from private companies, governments and the internet suggests that the development of an automated system is of utmost importance. The proposed framework can create 3-D views autonomously, which extends the functionality of digital mapping. The first step in creating 3-D views is to reconstruct the scene of the area to be mapped. To reconstruct a scene from heterogeneous sources, the data have to be registered: either to each other or, preferably, to a general, absolute coordinate system. Registering an image is based on reconstructing the geometric relationship of the image to the coordinate system at the time of imaging. Registration is the process of determining the geometric transformation parameters of a dataset in one coordinate system, the source, with respect to the other coordinate system, the target. The advantage of fusing these datasets by registration lies in the complementary information that different-modality datasets contain. The complementary characteristics of these systems can be fully utilized only after successful registration of the photogrammetric and

  7. Laser-plasma SXR/EUV sources: adjustment of radiation parameters for specific applications

    NASA Astrophysics Data System (ADS)

    Bartnik, A.; Fiedorowicz, H.; Fok, T.; Jarocki, R.; Kostecki, J.; Szczurek, A.; Szczurek, M.; Wachulak, P.; Wegrzyński, Ł.

    2014-12-01

    In this work soft X-ray (SXR) and extreme ultraviolet (EUV) laser-produced plasma (LPP) sources employing Nd:YAG laser systems of different parameters are presented. The first of them is a 10-Hz EUV source based on a double-stream gas-puff target irradiated with a 3-ns/0.8-J laser pulse. In the second one a 10-ns/10-J/10-Hz laser system is employed, and the third one utilizes a laser system with the pulse shortened to approximately 1 ns. Using various gases in the gas-puff targets it is possible to obtain intense radiation in different wavelength ranges. In this way intense continuous radiation in a wide spectral range as well as quasi-monochromatic radiation was produced. To obtain high EUV or SXR fluence the radiation was focused using three types of grazing-incidence collectors and a multilayer Mo/Si collector. The first of them is a multifoil gold-plated collector consisting of two orthogonal stacks of ellipsoidal mirrors forming a double-focusing device. The second is an ellipsoidal collector forming part of an axisymmetric ellipsoidal surface. The third is composed of two aligned axisymmetric paraboloidal mirrors optimized for focusing SXR radiation. The last collector is an off-axis ellipsoidal multilayer Mo/Si mirror allowing for efficient focusing of the radiation in the spectral region centered at λ = 13.5 ± 0.5 nm. In this paper spectra of unaltered EUV or SXR radiation produced in different LPP source configurations, together with spectra and fluence values of the focused radiation, are presented. Specific configurations of the sources were assigned to various applications.

  8. Kane model parameters and stochastic spin current

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debashree

    2015-11-01

    The spin current and spin conductivity are computed through a thermally driven stochastic process. By evaluating the Kramers equation and with the help of the k·p method we have studied the spin Hall scenario. Due to the thermal assistance, the Kane model parameters get modified, which consequently modulates the spin-orbit coupling (SOC). This modified SOC causes the spin current to change by a finite amount.

  9. Positive Psychology in the Personal Adjustment Course: A Salutogenic Model.

    ERIC Educational Resources Information Center

    Hymel, Glenn M.; Etherton, Joseph L.

    This paper proposes embedding various positive psychology themes in the context of an undergraduate course on the psychology of personal adjustment. The specific positive psychology constructs considered include those of hope, optimism, perseverance, humility, forgiveness, and spirituality. These themes are related to appropriate course content…

  10. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

    Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions. We use horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: for Fennoscandia the BIFROST dataset (Lidberg, 2010) and for North America the dataset from Sella (2007). We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007), varying ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice model ICE-5G (Peltier, 2004) and the ice model denoted ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. Firstly, the sensitivity to the harmonic degree was tested in order to reduce computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while the maximum harmonic degree was varied. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure does not differ by more than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen, as a larger value did not significantly modify the results obtained while the computation time was kept reasonable. Secondly, the GIA computations were performed to find the model most likely to fit the GNSS-based velocity field in the target areas. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on the horizontal and vertical velocity rates from GIA modelling was studied. For every

  11. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez.; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose To investigate a fast, objective and standardized method for analyzing breast DCE-MRI applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue; two further IS-eigenvectors and the first two ES-eigenvectors captured contrast-enhanced changes, whereas the remaining eigenvectors captured predominantly noise. Rotation of the two contrast-related eigenvectors led to a high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector to detect lesions (AUC>0.97) and of the 2nd rotated eigenvector to differentiate malignancy from benignancy (AUC=0.87). Conclusion PCA adjusted with a model-based method provided a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419
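    The core computation — PCA of the dynamic enhancement curves followed by a rotation of the two contrast-related eigenvectors — can be sketched as below. The enhancement patterns, voxel counts, and rotation angle are made-up illustrative values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_times = 200, 5  # 5 time points in the dynamic series
# Synthetic enhancement curves: mixtures of "wash-in" and "wash-out" patterns
washin = np.array([0.0, 0.6, 0.9, 1.0, 1.0])
washout = np.array([0.0, 1.0, 0.8, 0.6, 0.4])
w = rng.uniform(0, 1, (n_voxels, 2))
data = w @ np.vstack([washin, washout]) + rng.normal(0, 0.02, (n_voxels, n_times))

# PCA via SVD of the mean-centred data
centred = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
pc = vt[:2]              # first two eigenvectors (temporal patterns)
coeff = centred @ pc.T   # projection coefficients per voxel

# Rotate the two contrast-related eigenvectors by a fixed angle
theta = np.deg2rad(30.0)  # illustrative angle, not the paper's value
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
coeff_rot = coeff @ rot.T
```

    In the paper the rotation angle is chosen so the rotated projection coefficients align with the model-based 3TP parameters; here it is simply fixed to show the mechanics.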

  12. A whale better adjusts the biosonar to ordered rather than to random changes in the echo parameters.

    PubMed

    Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee

    2012-09-01

    A false killer whale's (Pseudorca crassidens) sonar clicks and auditory evoked potentials (AEPs) were recorded during echolocation with simulated echoes in two series of experiments. In the first, both the echo delay and the transfer factor (the dB ratio of the echo sound-pressure level to the emitted pulse source level) were varied randomly from trial to trial until enough data were collected (random presentation). In the second, a combination of echo delay and transfer factor was kept constant until enough data were collected (ordered presentation). The mean click level decreased with shortening delay and increasing transfer factor, more so with the ordered than with the random presentation. AEPs to the self-heard emitted clicks decreased with shortening delay and increasing echo level equally in both series. AEPs to echoes increased with increasing echo level, largely independent of echo delay with random presentation but much more dependent on delay with ordered presentation. Thus, some adjustment of the whale's biosonar was possible without prior information about the echo parameters; however, the availability of prior information about echoes gave the whale additional capability to adjust both the transmitting and receiving parts of its biosonar.

  13. Parameter estimation, model reduction and quantum filtering

    NASA Astrophysics Data System (ADS)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. 
The technique involves double-passing a probe laser field through the atomic system, giving
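    The "quantum particle filtering" mentioned above is, structurally, a sequential Monte Carlo method. A purely classical bootstrap particle filter estimating a static parameter under Gaussian measurement noise illustrates the mechanics; all numbers are synthetic, and this is not the quantum formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
true_b = 0.7   # unknown constant parameter (e.g. a field strength)
sigma = 0.5    # measurement noise standard deviation
obs = true_b + sigma * rng.normal(size=200)  # noisy measurement record

# Bootstrap particle filter for a static parameter
n_p = 2000
particles = rng.uniform(0.0, 2.0, n_p)  # samples from a flat prior
weights = np.full(n_p, 1.0 / n_p)
for y in obs:
    # Reweight by the Gaussian likelihood of the new observation
    weights *= np.exp(-0.5 * ((y - particles) / sigma) ** 2)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights**2)
    if ess < n_p / 2:  # resample when the effective sample size collapses
        idx = rng.choice(n_p, n_p, p=weights)
        particles = particles[idx] + 0.01 * rng.normal(size=n_p)  # jitter
        weights = np.full(n_p, 1.0 / n_p)

estimate = np.sum(weights * particles)  # posterior-mean estimate
```

    The jitter step is a standard (if crude) remedy for particle degeneracy with static parameters; more principled variants exist.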

  14. Parameter optimization in S-system models

    PubMed Central

    Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S

    2008-01-01

    Background The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
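    An S-system, the model class being parameterized here, writes each rate as a difference of two power-law terms, dX_i/dt = α_i ∏_j X_j^g_ij − β_i ∏_j X_j^h_ij. A minimal forward simulation, the kind of synthetic time series such methods are tested on, might look like this, with made-up coefficients:

```python
import numpy as np

# S-system: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
alpha = np.array([2.0, 1.5])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5],   # X2 inhibits production of X1
              [0.5,  0.0]])  # X1 activates production of X2
h = np.array([[0.75, 0.0],   # degradation of X1 depends on X1
              [0.0,  0.5]])  # degradation of X2 depends on X2

def s_system_rhs(x):
    # Broadcasting: (x**g)[i, j] = x[j]**g[i, j], then product over j
    production = alpha * np.prod(x**g, axis=1)
    degradation = beta * np.prod(x**h, axis=1)
    return production - degradation

# Forward-Euler time series: the raw material for parameter identification
x = np.array([1.0, 1.0])
dt, steps = 0.01, 2000
traj = [x.copy()]
for _ in range(steps):
    x = x + dt * s_system_rhs(x)
    traj.append(x.copy())
traj = np.array(traj)
```

    The identification methods discussed in the abstract work in the opposite direction: given `traj`, recover `alpha`, `beta`, `g`, and `h` after decoupling the equations.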

  15. Modeling fluvial incision and transient landscape evolution: Influence of dynamic channel adjustment

    NASA Astrophysics Data System (ADS)

    Attal, M.; Tucker, G. E.; Whittaker, A. C.; Cowie, P. A.; Roberts, G. P.

    2008-09-01

    Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width depends on a number of parameters, including channel slope, and is not solely a function of drainage area as is commonly assumed. The present work represents the first attempt to investigate the consequences of dynamic, gradient-sensitive channel adjustment for drainage-basin evolution. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic perturbation, using, as a template, the topography of a well-documented catchment in the footwall of an active normal fault in the Apennines (Italy) that is known to be undergoing a transient response to tectonic forcing. We show that the observed transient response can be reproduced to first order with a simple detachment-limited fluvial incision law. The transient landscape is characterized by gentler gradients and a shorter response time when dynamic channel adjustment is allowed. The differences in predicted channel geometry between the static case (width dependent solely on upstream area) and the dynamic case (width dependent on both drainage area and channel slope) lead to contrasting landscape morphologies when integrated at the scale of a whole catchment, particularly in the presence of strong tilting and/or pronounced slip-rate acceleration. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the relative uplift field is nonuniform.
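    The contrast between the static and dynamic width closures can be made concrete with a stream-power incision law of the form E ∝ QS/W. The width-scaling exponents below follow a commonly used Finnegan-type relation, and the profile numbers are illustrative, not taken from the paper:

```python
import numpy as np

# Detachment-limited incision proportional to stream power per unit width:
#   E = k_e * Q * S / W
# Width closures (Finnegan-style exponents; values are illustrative):
#   static:  W = k_w * A**0.5                  (drainage area only)
#   dynamic: W = k_w * A**(3/8) * S**(-3/16)   (area and slope)
A = np.logspace(5, 8, 50)       # drainage area, m^2
S = 0.05 * (A / A[0])**(-0.4)   # concave-up slope-area relation (illustrative)
k_e, k_w, runoff = 1e-6, 0.1, 1.0
Q = runoff * A                  # discharge proportional to area

W_static = k_w * A**0.5
W_dynamic = k_w * A**(3.0 / 8.0) * S**(-3.0 / 16.0)

E_static = k_e * Q * S / W_static
E_dynamic = k_e * Q * S / W_dynamic
```

    For this particular synthetic profile the dynamic closure gives narrower channels and hence higher incision rates everywhere; in a full landscape model the feedback between slope and width is what produces the contrasting transient morphologies described above.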

  16. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line-of-sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least-squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  17. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observables depending on kinematic variables is presented. The tool's ability to display large numbers of experimental data points together with the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, including the best-fit parameter values, are schematically shown. The DaMoScope codes are freely available.

  18. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  19. Moose models with vanishing S parameter

    SciTech Connect

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-09-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2){sub L} and U(1){sub Y} at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence, one has to introduce a dynamical nonlocal field connecting the two ends of the moose. The model then acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of S through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.

  20. Model parameters for simulation of physiological lipids

    PubMed Central

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  1. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
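The direction of the bias can be sketched with a deliberately simplified one-way error model (not the authors' multistate estimator): if nonbreeders are never mistaken for breeders, the naive breeder proportion underestimates the true transition probability by exactly the calf-detection probability.

```python
def adjust_breeding_probability(naive_estimate: float, p_correct: float) -> float:
    """Correct a naive breeder proportion for one-way misclassification.

    p_correct is the probability that a true breeder is correctly
    classified (i.e., her first-year calf is actually observed).
    Illustrative assumption: nonbreeders are never classified as breeders.
    """
    if not 0.0 < p_correct <= 1.0:
        raise ValueError("p_correct must be in (0, 1]")
    return min(naive_estimate / p_correct, 1.0)

# Illustrative values in the spirit of the abstract: a naive estimate of
# 0.31 roughly doubles if about half of attendant calves go undetected.
print(adjust_breeding_probability(0.31, 0.51))
```

The estimator divides rather than multiplies because misclassification here removes detections of breeding, so the observed proportion is the product of the true probability and the detection rate.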

  2. Microwave and infrared spectra, adjusted r0 structural parameters, conformational stabilities, vibrational assignments, and theoretical calculations of cyclobutylcarboxylic acid chloride.

    PubMed

    Klaassen, Joshua J; Darkhalil, Ikhlas D; Deodhar, Bhushan S; Gounev, Todor K; Gurusinghe, Ranil M; Tubergen, Michael J; Groner, Peter; Durig, James R

    2013-08-01

    The FT-microwave spectrum of cyclobutylcarboxylic acid chloride, c-C4H7C(O)Cl, has been recorded and 153 transitions for the (35)Cl and (37)Cl isotopologues have been assigned for the gauche-equatorial (g-Eq) conformation. The ground state rotational constants were determined for (35)Cl [(37)Cl]: A = 4349.8429(25) [4322.0555(56)] MHz, B = 1414.8032(25) [1384.5058(25)] MHz, and C = 1148.2411(25) [1126.3546(25)] MHz. From these rotational constants and ab initio predicted parameters, adjusted r0 parameters are reported with distances (Å) rCα-C = 1.491(4), rC═O = 1.193(3), rCα-Cβ = 1.553(4), rCα-Cβ' = 1.540(4), rCγ-Cβ = 1.547(4), rCγ-Cβ' = 1.546(4), rC-Cl = 1.801(3) and angles (deg) τCγCβCβ'Cα = 30.9(5). Variable temperature (-70 to -100 °C) infrared spectra (4000 to 400 cm(-1)) were recorded in liquid xenon, and the g-Eq conformer was determined to be the most stable form, with enthalpy differences of 91 ± 9 cm(-1) (1.09 ± 0.11 kJ/mol) for the gauche-axial (g-Ax) form and 173 ± 17 cm(-1) (2.07 ± 0.20 kJ/mol) for the trans-equatorial (t-Eq) conformer. The relative amounts at ambient temperature are 54% g-Eq, 35 ± 1% g-Ax, and 12 ± 1% t-Eq forms. Vibrational assignments have been provided for the three conformers and theoretical calculations were carried out. The results are discussed and compared to corresponding properties of related molecules.

  3. Multiscale modeling of failure in composites under model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.

    2015-09-01

    This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.

  4. Gait parameter adjustments for walking on a treadmill at preferred, slower, and faster speeds in older adults with down syndrome.

    PubMed

    Smith, Beth A; Kubo, Masayoshi; Ulrich, Beverly D

    2012-01-01

    The combined effects of ligamentous laxity, hypotonia, and decrements associated with aging lead to stability-enhancing foot placement adaptations during routine overground walking at a younger age in adults with Down syndrome (DS) compared to their peers with typical development (TD). Our purpose here was to examine real-time adaptations in older adults with DS by testing their responses to walking on a treadmill at their preferred speed and at speeds slower and faster than preferred. We found that older adults with DS were able to adapt their gait to slower and faster than preferred treadmill speeds; however, they maintained their stability-enhancing foot placements at all speeds compared to their peers with TD. All adults adapted their gait patterns similarly in response to faster and slower than preferred treadmill-walking speeds. They increased stride frequency and stride length, maintained step width, and decreased percent stance as treadmill speed increased. Older adults with DS, however, adjusted their stride frequencies significantly less than their peers with TD. Our results show that older adults with DS have the capacity to adapt their gait parameters in response to different walking speeds while also supporting the need for intervention to increase gait stability.

  5. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    SciTech Connect

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
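A minimal sketch of the gating logic described above, with invented names and a toy performance function: sensitivities are estimated by finite differences, and a map entry is rewritten only when its sensitivity exceeds the threshold.

```python
def sensitivity(perf_fn, params, index, step=1e-3):
    """Finite-difference sensitivity of the performance variable to one
    engine control parameter (illustrative, not the patent's method)."""
    bumped = list(params)
    bumped[index] += step
    return (perf_fn(bumped) - perf_fn(params)) / step

def adjust_maps(map_values, params, perf_fn, threshold=0.5):
    """Fold an adjusted parameter back into its map look-up entry only
    when the performance variable is sensitive to that parameter."""
    adjusted = list(map_values)
    for i in range(len(params)):
        if abs(sensitivity(perf_fn, params, i)) > threshold:
            adjusted[i] = params[i]
    return adjusted

# Toy performance variable: strongly driven by parameter 0, weakly by 1,
# so only the first map entry is updated.
perf = lambda p: 2.0 * p[0] + 0.01 * p[1]
print(adjust_maps([10.0, 20.0], [11.0, 21.0], perf))
```

The threshold keeps insensitive map entries stable, so noise in the performance measurement does not churn the whole engine map.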

  6. Development of a winter wheat adjustable crop calendar model

    NASA Technical Reports Server (NTRS)

    Baker, J. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.

  7. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…
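The model under discussion has a standard closed form; the sketch below evaluates the 3PLM item response function and shows why c acts as a lower asymptote rather than a simple guessing rate.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float, D: float = 1.7) -> float:
    """Standard 3PLM item response function: probability that an examinee
    with ability theta answers correctly, given discrimination a,
    difficulty b, and lower asymptote c.  D = 1.7 is the conventional
    normal-ogive scaling constant."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# At theta == b the curve sits at the midpoint between c and 1, not 0.5:
print(p_correct(0.0, a=1.0, b=0.0, c=0.2))
```

Because the curve never drops below c even for very low ability, fixing c (the article's topic) pins down the floor of the response probability for all examinees at once.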

  8. A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.

  9. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, Anne B.; Lizarraga, Joy S.

    1996-01-01

    Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local database is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among proposed procedures.

  10. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  11. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  12. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data.

    PubMed

    Kendall, William L; White, Gary C; Hines, James E; Langtimm, Catherine A; Yoshizaki, Jun

    2012-04-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last 20 years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected-value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We have also implemented these models in program MARK. This general framework could also be used by practitioners to consider constrained models of particular interest, or to model the relationship between within-primary-period parameters (e.g., state structure) and between-primary-period parameters (e.g., state transition probabilities). PMID:22690641

  13. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  14. Development of a charge adjustment model for cardiac catheterization.

    PubMed

    Brennan, Andrew; Gauvreau, Kimberlee; Connor, Jean; O'Connell, Cheryl; David, Sthuthi; Almodovar, Melvin; DiNardo, James; Banka, Puja; Mayer, John E; Marshall, Audrey C; Bergersen, Lisa

    2015-02-01

    A methodology that would allow for comparison of charges across institutions has not been developed for catheterization in congenital heart disease. A single institution catheterization database with prospectively collected case characteristics was linked to hospital charges related and limited to an episode of care in the catheterization laboratory for fiscal years 2008-2010. Catheterization charge categories (CCC) were developed to group types of catheterization procedures using a combination of empiric data and expert consensus. A multivariable model with outcome charges was created using CCC and additional patient and procedural characteristics. In 3 fiscal years, 3,839 cases were available for analysis. Forty catheterization procedure types were categorized into 7 CCC yielding a grouper variable with an R (2) explanatory value of 72.6%. In the final CCC, the largest proportion of cases was in CCC 2 (34%), which included diagnostic cases without intervention. Biopsy cases were isolated in CCC 1 (12%), and percutaneous pulmonary valve placement alone made up CCC 7 (2%). The final model included CCC, number of interventions, and cardiac diagnosis (R (2) = 74.2%). Additionally, current financial metrics such as APR-DRG severity of illness and case mix index demonstrated a lack of correlation with CCC. We have developed a catheterization procedure type financial grouper that accounts for the diverse case population encountered in catheterization for congenital heart disease. CCC and our multivariable model could be used to understand financial characteristics of a population at a single point in time, longitudinally, and to compare populations.

  16. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    NASA Astrophysics Data System (ADS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles-Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Linear and logarithmic functions are both adopted to encode the five parameters of the JA model. Roulette-wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportion that depends on the current generation's common value. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of the hysteresis loops of a silicon-steel sheet, and the results are in good agreement with published data.
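The fitness function described above can be sketched as follows; the normalization, sampling density, and distance metric are plausible readings of the abstract rather than the paper's exact implementation.

```python
import numpy as np

def loop_fitness(measured, simulated, n_points=50):
    """Score a simulated hysteresis loop against a measured one by the
    summed distance between equidistant key points of the normalized
    loops (lower is better; a GA would minimize this)."""
    def key_points(loop):
        loop = np.asarray(loop, dtype=float)
        loop = loop / np.max(np.abs(loop))             # normalize amplitude
        idx = np.linspace(0, len(loop) - 1, n_points)  # equidistant samples
        return np.interp(idx, np.arange(len(loop)), loop)
    return float(np.sum(np.abs(key_points(measured) - key_points(simulated))))

# Identical loops score zero; a purely rescaled loop also scores zero
# (normalization removes amplitude), while a distorted loop scores higher.
m = np.sin(np.linspace(0, 2 * np.pi, 200))
print(loop_fitness(m, 0.5 * m))
print(loop_fitness(m, m ** 3) > 0.0)
```

Scoring shape rather than raw amplitude keeps the GA focused on the five JA parameters that control loop geometry instead of trivial scaling.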

  17. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
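The two criteria being compared have standard textbook forms (n observations, p predictors in the candidate model, SSE_p the candidate's error sum of squares, MSE_full the full model's mean squared error):

```python
def adjusted_r_squared(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def mallows_cp(sse_p: float, mse_full: float, n: int, p: int) -> float:
    """One common form of Mallow's Cp for a model with p predictors
    (p + 1 estimated coefficients including the intercept)."""
    return sse_p / mse_full - (n - 2 * (p + 1))

# Adding predictors that barely raise R^2 can lower adjusted R^2,
# which is the penalty these criteria trade on:
print(adjusted_r_squared(0.80, n=30, p=3))
print(adjusted_r_squared(0.81, n=30, p=6))
```

A model whose Cp is close to p + 1 is conventionally read as having little bias, which is the yardstick the cited study pits against stepwise selection.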

  18. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    USGS Publications Warehouse

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
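The proposed shift can be illustrated with a generic dome-shaped temperature multiplier; the Gaussian shape, optimum, and width below are invented stand-ins, not the Wisconsin model's actual temperature equations.

```python
import math

def f_cmax(temp_c: float, t_opt: float = 19.0, width: float = 6.0) -> float:
    """Generic Gaussian stand-in for the temperature dependence of
    maximum consumption (Cmax); parameter values are illustrative."""
    return math.exp(-((temp_c - t_opt) / width) ** 2)

def f_cmax_adjusted(temp_c: float) -> float:
    """Same curve with the optimum shifted about 4 deg C warmer, in the
    spirit of the paper's conclusion."""
    return f_cmax(temp_c, t_opt=23.0)

# The peak moves 4 deg C warmer, raising predicted consumption at the
# warm temperatures (17-25 deg C) where the original parameters failed:
print(f_cmax(19.0) == f_cmax_adjusted(23.0))
print(f_cmax_adjusted(25.0) > f_cmax(25.0))
```

Shifting the curve rather than rescaling it preserves the measured peak consumption while moving the thermal optimum toward the feeding-trial data.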

  19. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    USGS Publications Warehouse

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
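The amplification mechanism can be sketched with the classic (unmodified) Jensen-Haise form, in which PET is roughly linear in temperature; the coefficient, temperature threshold, and radiation values below are illustrative only, not the study's modified formula.

```python
def jensen_haise_pet(temp_c: float, solar: float,
                     c: float = 0.025, t_x: float = -3.0) -> float:
    """Classic Jensen-Haise form, PET ~ c * (T - t_x) * R_s.
    Coefficient c and threshold t_x are illustrative assumptions."""
    return c * (temp_c - t_x) * solar

# Warming by dT raises PET by the fraction dT / (T - t_x), independent
# of the surface energy budget that climate models actually track:
base = jensen_haise_pet(15.0, 20.0)
warm = jensen_haise_pet(18.0, 20.0)   # +3 deg C, same radiation
print(round((warm - base) / base, 3))
```

Because the fractional change depends only on temperature, a purely temperature-based PET formula can easily imply several times the PET growth that an energy-budget calculation would, which is the pitfall the abstract describes.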

  20. Multi-objective global sensitivity analysis of the WRF model parameters

    NASA Astrophysics Data System (ADS)

    Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen

    2015-04-01

    Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as the Weather Research and Forecasting (WRF) model. However, this is a very complicated process, as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model with respect to different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing longwave radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters that affect the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities to the 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs, but three to four of the parameters are shown to be sensitive to all model outputs considered. The sensitivity results from this research can be the basis for future parameter optimization of the WRF model.

  1. On the Hydrologic Adjustment of Climate-Model Projections: The Potential Pitfall of Potential Evapotranspiration

    USGS Publications Warehouse

    Milly, Paul C.D.; Dunne, Krista A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.

  2. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  3. Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Barnhoorn, Auke; Stocchi, Paolo; Gradmann, Sofie; Wu, Patrick; Drury, Martyn; Vermeersen, Bert

    2013-07-01

    Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes a single deformation mechanism for mantle rocks. Such a simplified viscosity profile makes it hard to compare the inferred mantle rheology to inferences from seismology and laboratory experiments. It is unknown what constraints GIA observations can provide on more realistic mantle rheology with an ice history that is not based on an a priori mantle viscosity profile. This paper investigates a model for GIA with a new ice history for Fennoscandia that is constrained by palaeoclimate proxies and glacial sediments. Diffusion and dislocation creep flow law data are taken from a compilation of laboratory measurements on olivine. Upper-mantle temperature data sets down to 400 km depth are derived from surface heat flow measurements, a petrochemical model for Fennoscandia and seismic velocity anomalies. Creep parameters below 400 km are taken from an earlier study and vary only with depth. The olivine grain size and water content (a wet state, or a dry state) are used as free parameters. The solid Earth response is computed with a global spherical 3-D finite-element model for an incompressible, self-gravitating Earth. We compare predictions to sea level data and GPS uplift rates in Fennoscandia. The objective is to see if the mantle rheology and the ice model are consistent with GIA observations. We also test if the inclusion of dislocation creep gives any improvements over predictions with diffusion creep only, and whether the laterally varying temperatures result in an improved fit compared to a widely used 1-D viscosity profile (VM2). We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments
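    The composite rheology referred to above combines a linear diffusion-creep term with a power-law dislocation-creep term, making effective viscosity stress dependent. A minimal sketch with placeholder coefficients (not the olivine flow-law data used in the study):

```python
# Minimal sketch of "composite" rheology: total creep strain rate is the sum
# of a diffusion-creep term (linear in stress) and a dislocation-creep term
# (power law, n ~ 3.5 for olivine). Coefficient values are placeholders.
def composite_strain_rate(stress_pa, a_diff, a_disl, n=3.5):
    return a_diff * stress_pa + a_disl * stress_pa ** n

def effective_viscosity(stress_pa, a_diff, a_disl, n=3.5):
    """eta_eff = stress / (2 * strain_rate); stress dependent whenever the
    dislocation term contributes, which is why a single 1-D viscosity profile
    is a simplification of laboratory flow laws."""
    return stress_pa / (2.0 * composite_strain_rate(stress_pa, a_diff, a_disl, n))
```

    With the dislocation term switched off the viscosity is constant; with it on, higher stress gives lower effective viscosity.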

  4. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    PubMed

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
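    The arithmetic of the negative-control logic can be sketched in miniature (a hedged toy, not the authors' cause-specific relative hazards model; the numbers are made up): if exposure cannot cause the negative-control outcome, a nonnull hazard ratio for it estimates the confounding bias, which can then be divided out of the exposure-outcome hazard ratio.

```python
# Toy indirect adjustment, assuming the bias from unmeasured smoking acts
# multiplicatively and equally on the outcome of interest and on the
# negative-control outcome. Values below are illustrative, not the study's.
def indirectly_adjusted_hr(hr_outcome, hr_negative_control):
    """Divide the observed hazard ratio by the bias factor estimated from
    the negative-control outcome."""
    return hr_outcome / hr_negative_control

hr_adjusted = indirectly_adjusted_hr(1.50, 1.20)  # illustrative inputs
```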

  6. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    ERIC Educational Resources Information Center

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model) examining the effects resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  7. A Model of Divorce Adjustment for Use in Family Service Agencies.

    ERIC Educational Resources Information Center

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  8. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    ERIC Educational Resources Information Center

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  9. Suggestion of a Numerical Model for the Blood Glucose Adjustment with Ingesting a Food

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naokatsu; Takai, Hiroshi

    In this study, we present a numerical model of the time dependence of blood glucose after ingesting a meal. Two numerical models are proposed to describe, respectively, the digestion mechanism and the blood glucose adjustment mechanism in the body. Both models are expressed as simple equations using transfer functions and block diagrams. Additionally, the time dependence of blood glucose was measured when subjects ingested sucrose or starch. The computed output of the models fits the measured blood glucose time course very well. We therefore consider the digestion model and the adjustment model useful for estimating blood glucose after a meal.
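    A transfer-function block of the kind this record describes can be simulated in a few lines. A hedged sketch (not the authors' model): a first-order element G(s) = K/(tau*s + 1) driven by a step input, integrated with forward Euler; parameter values are illustrative.

```python
# First-order transfer function response y' = (gain*u - y) / tau, integrated
# with forward Euler. The input u stands in for a gut-absorption signal;
# tau and gain are illustrative, not fitted values.
def simulate_first_order(u, dt, tau, gain):
    y, out = 0.0, []
    for u_t in u:
        y += dt * (gain * u_t - y) / tau
        out.append(y)
    return out

dt = 0.1
u = [1.0] * 1000                 # step input over 100 time units
y = simulate_first_order(u, dt, tau=30.0, gain=1.0)
# y rises smoothly toward the gain, a first-order "adjustment" response
```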

  10. Modeling Fluvial Incision and Transient Landscape Evolution: Influence of Dynamic Channel Adjustment

    NASA Astrophysics Data System (ADS)

    Attal, M.; Tucker, G. E.; Cowie, P. A.; Whittaker, A. C.; Roberts, G. P.

    2007-12-01

    Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width (W) depends on a number of parameters, including channel slope, and is not only a function of drainage area (A) as is commonly assumed. The present work represents the first attempt to investigate the consequences, for landscape evolution, of using a static expression of channel width (W ~ A^0.5) versus a relationship that allows channels to dynamically adjust to changes in slope. We consider different models for the evolution of the channel geometry, including constant width-to-depth ratio (after Finnegan et al., Geology, v. 33, no. 3, 2005), and width-to-depth ratio varying as a function of slope (after Whittaker et al., Geology, v. 35, no. 2, 2007). We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic disturbance. The topography of a catchment in the footwall of an active normal fault in the Apennines (Italy) is used as a template for the study. We show that, for this catchment, the transient response can be fairly well reproduced using a simple detachment-limited fluvial incision law. We also show that, depending on the relationship used to express channel width, initial steady-state topographies differ, as do transient channel width, slope, and the response time of the fluvial system. These differences lead to contrasting landscape morphologies when integrated at the scale of a whole catchment. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the uplift field is non-uniform.
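    The static-versus-dynamic width contrast can be sketched numerically (illustrative exponents and coefficients, not CHILD's calibrated values): with erosion proportional to unit stream power, E ~ Q*S/W and Q ~ A, a width law that narrows on steeper slopes concentrates stream power and erodes faster than the static W ~ A^0.5 law.

```python
# Illustrative contrast between a static and a slope-dependent channel width
# law. All coefficients and exponents are placeholders for the sketch.
def erosion_rate(area, slope, width):
    k = 1e-4                       # illustrative erodibility
    return k * area * slope / width

def width_static(area):
    return 0.01 * area ** 0.5      # width from drainage area only

def width_dynamic(area, slope, ref_slope=0.01):
    # steeper channels narrow relative to the static prediction (illustrative)
    return width_static(area) * (ref_slope / slope) ** 0.25

e_static = erosion_rate(1e6, 0.02, width_static(1e6))
e_dynamic = erosion_rate(1e6, 0.02, width_dynamic(1e6, 0.02))
# after a slope increase, the dynamically narrowing channel erodes faster
```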

  11. Testing a developmental cascade model of adolescent substance use trajectories and young adult adjustment

    PubMed Central

    LYNNE-LANDSMAN, SARAH D.; BRADSHAW, CATHERINE P.; IALONGO, NICHOLAS S.

    2013-01-01

    Developmental models highlight the impact of early risk factors on both the onset and growth of substance use, yet few studies have systematically examined the indirect effects of risk factors across several domains, and at multiple developmental time points, on trajectories of substance use and adult adjustment outcomes (e.g., educational attainment, mental health problems, criminal behavior). The current study used data from a community epidemiologically defined sample of 678 urban, primarily African American youth, followed from first grade through young adulthood (age 21) to test a developmental cascade model of substance use and young adult adjustment outcomes. Drawing upon transactional developmental theories and using growth mixture modeling procedures, we found evidence for a developmental progression from behavioral risk to adjustment problems in the peer context, culminating in a high-risk trajectory of alcohol, cigarette, and marijuana use during adolescence. Substance use trajectory membership was associated with adjustment in adulthood. These findings highlight the developmental significance of early individual and interpersonal risk factors on subsequent risk for substance use and, in turn, young adult adjustment outcomes. PMID:20883591

  12. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on geometric analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension. PMID:27301005
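    In such simulations the realized contact angle is commonly measured from the simulated droplet's spherical-cap geometry: for a cap of height h and base half-width a, theta = 2*atan(h/a). A small helper implementing that identity (a general geometric relation, not the paper's adjustment scheme):

```python
import math

# Contact angle of a sessile droplet from its spherical-cap geometry.
# For cap height h and base half-width a: theta = 2 * atan(h / a),
# valid for angles between 0 and 180 degrees.
def contact_angle_deg(height, base_half_width):
    return math.degrees(2.0 * math.atan2(height, base_half_width))

# a hemispherical drop (h == a) wets the wall at exactly 90 degrees
```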

  14. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant.
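    Parameter redundancy can be illustrated numerically, even though the paper's exhaustive-summary methods are symbolic. In the hedged toy below, a model whose exhaustive summary is kappa = (a*b, (a*b)^2) can never separate a from b, so the Jacobian of kappa with respect to (a, b) has rank 1 at generic points, i.e., fewer than the two parameters:

```python
# Numeric redundancy check: finite-difference Jacobian of an exhaustive
# summary, then a rank test. The example summaries are toys, not models
# from the paper.
def jacobian(f, params, eps=1e-6):
    base = f(params)
    cols = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        cols.append([(fb - fo) / eps for fb, fo in zip(f(bumped), base)])
    # rows = outputs, columns = parameters
    return [[cols[j][i] for j in range(len(params))] for i in range(len(base))]

def rank2x2(m, tol=1e-4):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    if abs(det) > tol:
        return 2
    return 1 if any(abs(x) > tol for row in m for x in row) else 0

summary = lambda p: [p[0] * p[1], (p[0] * p[1]) ** 2]   # only a*b identifiable
J = jacobian(summary, [1.3, 0.7])
# rank 1 < 2 parameters: the parametrization is redundant
```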

  15. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  16. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

    A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. Compared with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast converging.
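    The Jiles-Atherton equations form an implicit ODE system whose adjoint is beyond a snippet, so the sketch below shows only the optimization ingredient named in the abstract: Nesterov-style accelerated gradient descent, here fitting a toy saturation curve m(h) = ms*tanh(h/a) with finite-difference (not adjoint) gradients. Everything below is a hedged stand-in.

```python
import math

# Accelerated (Nesterov-style) gradient descent on a least-squares misfit
# for a toy magnetization curve. Synthetic data: ms = 1.5, a = 0.8.
def misfit(params, hs, ms_obs):
    ms, a = params
    return sum((ms * math.tanh(h / a) - m) ** 2 for h, m in zip(hs, ms_obs))

def nesterov_fit(hs, ms_obs, x0, lr=0.005, mu=0.8, iters=4000, eps=1e-6):
    x, v = list(x0), [0.0, 0.0]
    for _ in range(iters):
        look = [xi + mu * vi for xi, vi in zip(x, v)]   # look-ahead point
        f0 = misfit(look, hs, ms_obs)
        grad = []
        for i in range(2):                              # finite differences
            bumped = list(look)
            bumped[i] += eps
            grad.append((misfit(bumped, hs, ms_obs) - f0) / eps)
        v = [mu * vi - lr * gi for vi, gi in zip(v, grad)]
        x = [xi + vi for xi, vi in zip(x, v)]
    return x

hs = [0.2 * i for i in range(1, 21)]
obs = [1.5 * math.tanh(h / 0.8) for h in hs]
ms_hat, a_hat = nesterov_fit(hs, obs, x0=[1.0, 1.0])
```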

  17. Risk-adjusted outcome models for public mental health outpatient programs.

    PubMed Central

    Hendryx, M S; Dyck, D G; Srebnik, D

    1999-01-01

    OBJECTIVE: To develop and test risk-adjustment outcome models in publicly funded mental health outpatient settings. We developed prospective risk models that used demographic and diagnostic variables; client-reported functioning, satisfaction, and quality of life; and case manager clinical ratings to predict subsequent client functional status, health-related quality of life, and satisfaction with services. DATA SOURCES/STUDY SETTING: Data collected from 289 adult clients at five- and ten-month intervals, from six community mental health agencies in Washington state located primarily in suburban and rural areas. Data sources included client self-report, case manager ratings, and management information system data. STUDY DESIGN: Model specifications were tested using prospective linear regression analyses. Models were validated in a separate sample and comparative agency performance examined. PRINCIPAL FINDINGS: Presence of severe diagnoses, substance abuse, client age, and baseline functional status and quality of life were predictive of mental health outcomes. Unadjusted versus risk-adjusted scores resulted in differently ranked agency performance. CONCLUSIONS: Risk-adjusted functional status and patient satisfaction outcome models can be developed for public mental health outpatient programs. Research is needed to improve the predictive accuracy of the outcome models developed in this study, and to develop techniques for use in applied settings. The finding that risk adjustment changes comparative agency performance has important consequences for quality monitoring and improvement. Issues in public mental health risk adjustment are discussed, including static versus dynamic risk models, utilization versus outcome models, choice and timing of measures, and access and quality improvement incentives. PMID:10201857
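    The finding that risk adjustment reorders agency rankings can be shown with a toy regression (synthetic data, not the study's measures or results): an agency serving clients with lower baseline functioning looks worse on raw means but better on observed-minus-expected residuals from a pooled model.

```python
# Toy risk adjustment: regress outcome on baseline functioning across all
# clients, then compare agencies on mean residuals instead of raw means.
def ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
            sum((x - xb) ** 2 for x in xs)
    return slope, yb - slope * xb

baseline = {"A": [2.0, 3.0, 4.0], "B": [6.0, 7.0, 8.0]}   # case mix differs
outcome = {"A": [2.1, 2.9, 3.7], "B": [4.8, 5.6, 6.4]}
m, b = ols(baseline["A"] + baseline["B"], outcome["A"] + outcome["B"])
raw = {k: sum(v) / len(v) for k, v in outcome.items()}
adjusted = {k: sum(y - (m * x + b) for x, y in zip(baseline[k], outcome[k]))
               / len(outcome[k]) for k in outcome}
# raw means rank B above A; risk-adjusted residuals rank A above B
```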

  18. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
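    One of the four SA approaches named above, standardized regression coefficients, reduces to simple correlations when the parameter sample is orthogonal. A hedged sketch on a toy response standing in for a CLM output such as runoff:

```python
import itertools
import math

# Standardized regression coefficients (SRC) for a toy two-parameter model.
# With an orthogonal full-factorial sample, each parameter's SRC equals its
# correlation with the response. The response function is illustrative.
def corr(xs, ys):
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - xb) ** 2 for x in xs))
    sy = math.sqrt(sum((y - yb) ** 2 for y in ys))
    return sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / (sx * sy)

levels = [0.0, 0.25, 0.5, 0.75, 1.0]
samples = list(itertools.product(levels, levels))      # orthogonal design
response = [5.0 * p1 + 1.0 * p2 for p1, p2 in samples] # toy model output
src = [corr([s[i] for s in samples], response) for i in range(2)]
# src[0] > src[1]: the first parameter dominates the response
```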

  20. Automatic parameter adjustment of difference of Gaussian (DoG) filter to improve OT-MACH filter performance for target recognition applications

    NASA Astrophysics Data System (ADS)

    Alkandri, Ahmad; Gardezi, Akber; Bangalore, Nagachetan; Birch, Philip; Young, Rupert; Chatwin, Chris

    2011-11-01

    A wavelet-modified frequency domain Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter has been trained using 3D CAD models and tested on real target images acquired from a Forward Looking Infra Red (FLIR) sensor. The OT-MACH filter can be used to detect and discriminate predefined targets from a cluttered background. The FLIR sensor extends the filter's ability by increasing the range of detection, exploiting the heat signature differences between the target and the background. A Difference of Gaussians (DoG) based wavelet filter has been used to improve the OT-MACH filter's discrimination ability and distortion tolerance. Choosing the right standard deviation values of the two Gaussians comprising the filter is critical. In this paper we present a new technique for automatic adjustment of the DoG filter parameters, driven by the expected target size. Tests were carried out on images acquired by the Apache AH-64 helicopter-mounted FLIR sensor, with results showing an overall improvement in the recognition of target objects present within the IR images.
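    The parameter-from-target-size idea can be sketched in one dimension (the 1:1.6 sigma ratio and the size-to-sigma mapping below are illustrative choices, not the paper's calibration):

```python
import math

# 1-D Difference of Gaussians kernel whose two standard deviations are
# derived from the expected target size in pixels. Mapping and sigma ratio
# are illustrative.
def dog_kernel_1d(target_size_px, ratio=1.6, radius=None):
    sigma1 = target_size_px / 4.0          # illustrative size-to-sigma mapping
    sigma2 = ratio * sigma1
    radius = radius or int(3 * sigma2)

    def g(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

    return [g(x, sigma1) - g(x, sigma2) for x in range(-radius, radius + 1)]

kernel = dog_kernel_1d(20)
# positive center lobe, negative surround: a band-pass tuned to ~20 px targets
```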

  1. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance that leads to filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data recursively into the model and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the
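    The joint state-parameter idea in this record can be sketched in miniature. The scalar toy below (a hedged stand-in, not the paper's SEnKF or flux model) augments each ensemble member with a parameter p for the model x_next = p*x + 1, and applies kernel smoothing (shrink toward the mean plus jitter) to the parameter sample before each forecast to limit variance collapse:

```python
import random

# Scalar toy of joint state-parameter ensemble Kalman filtering with kernel
# smoothing. True parameter is 0.9; the ensemble starts centered on 0.7.
random.seed(1)

N, TRUE_P, R, SHRINK = 200, 0.9, 0.1, 0.98
ens = [[1.0, random.gauss(0.7, 0.1)] for _ in range(N)]  # members are [x, p]
x_true = 1.0
for _ in range(150):
    x_true = TRUE_P * x_true + 1.0                 # truth run
    y_obs = x_true + random.gauss(0.0, R)          # noisy observation
    p_bar = sum(m[1] for m in ens) / N
    for m in ens:
        # kernel smoothing of the parameter sample: shrink toward mean, jitter
        m[1] = SHRINK * m[1] + (1 - SHRINK) * p_bar + random.gauss(0.0, 0.005)
        m[0] = m[1] * m[0] + 1.0                   # forecast step
    p_bar = sum(m[1] for m in ens) / N
    xs = [m[0] for m in ens]
    x_bar = sum(xs) / N
    var_x = sum((x - x_bar) ** 2 for x in xs) / (N - 1)
    cov_px = sum((m[1] - p_bar) * (m[0] - x_bar) for m in ens) / (N - 1)
    for m in ens:                                  # perturbed-obs update
        innov = y_obs + random.gauss(0.0, R) - m[0]
        m[0] += var_x / (var_x + R * R) * innov
        m[1] += cov_px / (var_x + R * R) * innov
p_hat = sum(m[1] for m in ens) / N                 # parameter estimate
```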

  2. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520

  3. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    PubMed

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) were related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  4. A reassessment of the PRIMO recommendations for adjustments to mid-latitude ionospheric models

    NASA Astrophysics Data System (ADS)

    David, M.; Sojka, J. J.; Schunk, R. W.

    2012-12-01

    In the late 1990s, in response to the realization that ionospheric physical models tended to underestimate the dayside peak F-region electron density (NmF2) by about a factor of 2, a group of modelers convened to find out why. The project was dubbed PRIMO, standing for Problems Relating to Ionospheric Models and Observations. Five ionospheric models were employed in the original study, including the Utah State University Time Dependent Ionospheric Model (TDIM), which is the focus of the present study. No physics-based explanation was put forward for the models' shortcomings, but there was a recommendation that three adjustments be made within the models: 1) include a Burnside factor of 1.7 for the diffusion coefficients; 2) change the branching ratio of O+ from 0.38 to 0.25; and 3) scale the dayside ion production rates upward to account for ionization by secondary photons. The PRIMO recommendations were dutifully included in our TDIM model at Utah State University, though as time went on, and particularly while modeling the ionosphere during the International Polar Year (2007), it became clear that the PRIMO adjustments sometimes caused the model to produce excessively high dayside electron densities. As the original PRIMO study [Anderson et al, 1998] was based upon model/observation comparison over a very limited set of observations from just one station (Millstone Hill, Massachusetts), we have expanded the range of the study, taking advantage of resources that were not available 12 years ago, most notably the NGDC SPIDR Internet database, and faster computers for running large numbers of simulations with the TDIM model. 
We look at ionosonde measurements of the peak dayside electron densities at mid-latitudes around the world, across the full range of seasons and solar cycles, as well as levels of geomagnetic activity, in order to determine at which times the PRIMO adjustments should be included in the model, and when it is best not to

  5. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  6. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60% female; 74% US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. In terms of normative change, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. The two experiences were not associated with each other, suggesting independent forms of social interaction. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  7. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  8. Refining a Multidimensional Model of Community Adjustment through an Analysis of Postschool Follow-Up Data.

    ERIC Educational Resources Information Center

    Thompson, James R.; McGrew, Kevin S.; Johnson, David R.; Bruininks, Robert H.

    2000-01-01

    Survey data were collected on the life experiences and status of 388 young adults with disabilities out of school for 1 to 5 years. Results support a 7-factor model of community adjustment: personal satisfaction, employment-economic integration, community assimilation, need for support services, recreation-leisure integration, social network…

  9. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  10. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    ERIC Educational Resources Information Center

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  11. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  12. Transferability of calibrated microsimulation model parameters for safety assessment using simulated conflicts.

    PubMed

    Essa, Mohamed; Sayed, Tarek

    2015-11-01

    Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. As well, the results have emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection, and the results were compared to those from parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as

  13. Parameter estimation in deformable models using Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Haynor, David R.; Sampson, Paul D.; Kim, Yongmin

    1997-04-01

    Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.
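    The Bayesian machinery described here can be illustrated with a minimal random-walk Metropolis-Hastings sampler for a single model parameter. The toy Gaussian likelihood and all names are illustrative, not the paper's deformable-model formulation:

```python
import math
import random

# Minimal Metropolis-Hastings sketch: sample the posterior of one parameter
# theta given noisy data, p(theta | data) ∝ likelihood * (flat) prior.
random.seed(0)
data = [random.gauss(2.0, 0.5) for _ in range(200)]  # synthetic observations

def log_posterior(theta):
    # Gaussian likelihood with known sigma = 0.5 and a flat prior on theta
    return -sum((x - theta) ** 2 for x in data) / (2 * 0.5 ** 2)

theta, samples = 0.0, []
for _ in range(5000):
    proposal = theta + random.gauss(0.0, 0.1)  # random-walk proposal
    # accept with probability min(1, posterior ratio)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

# discard burn-in, then summarize the chain
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The posterior mean recovered from the chain should sit close to the true value (2.0) used to generate the synthetic data.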

  14. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The "Biosphere Model Report" (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, "Agricultural and Environmental Input Parameters for the Biosphere Model", is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  15. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    SciTech Connect

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming; Carnes, Brian; Chen, Ken Shuang

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  16. Linear regression models of floor surface parameters on friction between Neolite and quarry tiles.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Grönqvist, Raoul; Hirvonen, Mikko

    2010-01-01

    For slips and falls, friction is widely used as an indicator of surface slipperiness. Surface parameters, including surface roughness and waviness, were shown to influence friction by correlating individual surface parameters with the measured friction. A collective input from multiple surface parameters as a predictor of friction, however, could provide a broader perspective on the contributions from all the surface parameters evaluated. The objective of this study was to develop regression models between the surface parameters and measured friction. The dynamic friction was measured using three different mixtures of glycerol and water as contaminants. Various surface roughness and waviness parameters were measured using three different cut-off lengths. The regression models indicate that the selected surface parameters can predict the measured friction coefficient reliably in most of the glycerol concentrations and cut-off lengths evaluated. The results of the regression models were, in general, consistent with those obtained from the correlation between individual surface parameters and the measured friction in eight out of nine conditions evaluated in this experiment. A hierarchical regression model was further developed to evaluate the cumulative contributions of the surface parameters in the final iteration by adding these parameters to the regression model one at a time, from the easiest to measure to the most difficult to measure, and evaluating their impacts on the adjusted R² values. For practical purposes, the surface parameter Ra alone would account for the majority of the measured friction even if it did not reach a statistically significant level in some of the regression models.
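    The hierarchical step, watching the adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1) as predictors are added, can be sketched on synthetic data with a single roughness predictor. All numbers and names here are assumptions, not the study's measurements:

```python
import random

# Synthetic friction data driven mostly by a roughness term Ra, plus a small
# waviness contribution and noise (coefficients are illustrative).
random.seed(7)
n = 40
ra = [random.uniform(1, 10) for _ in range(n)]   # roughness Ra
wav = [random.uniform(0, 5) for _ in range(n)]   # a waviness parameter
cof = [0.1 + 0.05 * r + 0.01 * w + random.gauss(0, 0.05)
       for r, w in zip(ra, wav)]                 # "measured" friction

def r2(y, yhat):
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def fit1(x, y):
    """Simple least squares y = a + b*x; returns fitted values."""
    xbar, ybar = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sxy / sxx
    a = ybar - b * xbar
    return [a + b * xi for xi in x]

# first hierarchical step: Ra alone, with its adjusted R^2 (p = 1 predictor)
r2_ra = r2(cof, fit1(ra, cof))
r2_adj = 1 - (1 - r2_ra) * (n - 1) / (n - 1 - 1)
```

With the dominant Ra term, this first step already explains most of the variance, mirroring the abstract's conclusion that Ra alone accounts for the majority of the measured friction.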

  17. On retrial queueing model with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng

    2007-01-01

    This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, the fuzzy retrial queue is represented more accurately, and the analytic results are more useful for system designers and practitioners.

  18. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
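    The simulate-then-estimate procedure can be sketched in one dimension: generate concentrations from an instantaneous-release diffusion profile, add Gaussian "sensor" noise, and recover the diffusion coefficient by batch least squares (a grid search stands in for the paper's batch processor; all values are assumptions):

```python
import math
import random

# 1-D instantaneous-release diffusion profile:
#   C(x) = M / sqrt(4*pi*D*t) * exp(-x^2 / (4*D*t))
# True M, D are assumed; "remote-sensed" data = model values + Gaussian noise.
random.seed(1)
M_TRUE, D_TRUE, T = 100.0, 2.0, 5.0

def conc(x, d):
    return M_TRUE / math.sqrt(4 * math.pi * d * T) * math.exp(-x * x / (4 * d * T))

xs = [i * 0.5 for i in range(-20, 21)]
obs = [conc(x, D_TRUE) + random.gauss(0.0, 0.1) for x in xs]  # noisy readings

def sse(d):
    # sum of squared residuals between observations and the model for trial d
    return sum((o - conc(x, d)) ** 2 for x, o in zip(xs, obs))

# batch least squares via a grid search over candidate diffusion coefficients
d_hat = min((0.5 + 0.01 * k for k in range(400)), key=sse)
```

The recovered coefficient `d_hat` lands close to the true value of 2.0; tightening the sensor noise or adding readings sharpens the estimate, which is exactly the sensor-design trade-off the abstract describes.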

  19. A Logical Difficulty of the Parameter Setting Model.

    ERIC Educational Resources Information Center

    Sasaki, Yoshinori

    1990-01-01

    Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)

  20. Determining extreme parameter correlation in ground water models.

    USGS Publications Warehouse

    Hill, M.C.; Osterby, O.

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters were more equally sensitive. When the statistical measures fail, parameter correlation can be identified only by the tedious process of executing regression using different sets of starting values, or, in some circumstances, through graphs of the objective function.
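    For a two-parameter problem, the correlation coefficient of the estimates follows directly from the 2x2 inverse of J^T J, where the columns of the Jacobian J are the sensitivities of the simulated heads to each parameter. A sketch with illustrative sensitivity columns (not from any real ground water model):

```python
import math

def param_correlation(j1, j2):
    """Correlation coefficient of two parameter estimates from their
    sensitivity (Jacobian) columns; the covariance is proportional to
    (J^T J)^-1. Nearly parallel columns give |r| -> 1, i.e. the parameters
    cannot be estimated uniquely."""
    a = sum(v * v for v in j1)               # (J^T J)[0,0]
    b = sum(u * v for u, v in zip(j1, j2))   # off-diagonal element
    c = sum(v * v for v in j2)               # (J^T J)[1,1]
    det = a * c - b * b
    var1, var2, cov12 = c / det, a / det, -b / det  # 2x2 inverse
    return cov12 / math.sqrt(var1 * var2)

# e.g. hydraulic-conductivity vs recharge sensitivities (illustrative numbers)
r_collinear = param_correlation([1.0, 2.0, 3.0], [1.01, 2.02, 3.02])
r_distinct  = param_correlation([1.0, 2.0, 3.0], [3.0, -1.0, 0.5])
```

Only values whose absolute value rounds to 1.00 reliably flag extreme correlation, matching result (1) of the abstract; smaller values do not guarantee unique estimates.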

  1. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  2. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  3. Influences of parameter uncertainties within the ICRP-66 respiratory tract model: a parameter sensitivity analysis.

    PubMed

    Huston, Thomas E; Farfán, Eduardo B; Bolch, W Emmett; Bolch, Wesley E

    2003-11-01

    An important aspect in model uncertainty analysis is the evaluation of input parameter sensitivities with respect to model outcomes. In previous publications, parameter uncertainties were examined for the ICRP-66 respiratory tract model. The studies were aided by the development and use of a computer code LUDUC (Lung Dose Uncertainty Code) which allows probability density functions to be specified for all ICRP-66 model input parameters. These density functions are sampled using Latin hypercube techniques, with values subsequently propagated through the ICRP-66 model. In the present study, LUDUC has been used to perform a detailed parameter sensitivity analysis of the ICRP-66 model using input parameter density functions specified in previously published articles. The results suggest that most of the variability in the dose to a given target region is explained by only a few input parameters. For example, for particle diameters between 0.1 and 50 µm, about 50% of the variability in the total lung dose (weighted sum of target tissue doses) for 239PuO2 is due to variability in the dose to the alveolar-interstitial (AI) region. In turn, almost 90% of the variability in the dose to the AI region is attributable to uncertainties in only four parameters in the model: the ventilation rate, the AI deposition fraction, the clearance rate constant for slow-phase absorption of deposited material to the blood, and the clearance rate constant for particle transport from the AI2 to bb1 compartment. A general conclusion is that many input parameters do not significantly influence variability in final doses. As a result, future research can focus on improving density functions for those input variables that contribute the most to variability in final dose values. PMID:14571988
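    Latin hypercube sampling of the kind LUDUC uses can be sketched in a few lines: one draw per equal-probability stratum for each parameter, with the strata shuffled across samples. The parameter ranges below are assumptions for illustration, not ICRP-66 values:

```python
import random

def latin_hypercube(n, ranges, rng):
    """n samples over uniform ranges; each parameter contributes exactly one
    value from each of its n equal-probability strata."""
    cols = []
    for lo, hi in ranges:
        # one draw per stratum, then shuffle the strata across samples
        col = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # n samples, one value per parameter

rng = random.Random(42)
# e.g. ventilation rate (m^3/h) and AI deposition fraction (assumed ranges)
samples = latin_hypercube(10, [(0.5, 2.0), (0.05, 0.25)], rng)
```

Because every stratum of every parameter is hit exactly once, far fewer runs are needed than with plain Monte Carlo to cover the input space, which is why it suits expensive dosimetry models.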

  4. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  5. Biological parameters for lung cancer in mathematical models of carcinogenesis.

    PubMed

    Jacob, P; Jacob, V

    2003-01-01

    Applications of the two-stage model of carcinogenesis with clonal expansion (TSCE) to lung cancer data are reviewed, including those on atomic bomb survivors from Hiroshima and Nagasaki, British doctors, Colorado Plateau miners and Chinese tin miners. Different sets of identifiable model parameters are used in the literature. The parameter set which could be determined with the lowest uncertainty consists of the net proliferation rate gamma of intermediate cells, the hazard h55 at an intermediate age and the hazard h(infinity) at an asymptotically large age. Also, the values of these three parameters obtained in the various studies are more consistent than other identifiable combinations of the biological parameters. Based on representative results for these three parameters, implications for the biological parameters in the TSCE model are derived. PMID:14579892

  6. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    PubMed

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173

  7. Estimation of musculotendon parameters for scaled and subject specific musculoskeletal models using an optimization technique.

    PubMed

    Modenese, Luca; Ceseracciu, Elena; Reggiani, Monica; Lloyd, David G

    2016-01-25

    A challenging aspect of subject specific musculoskeletal modeling is the estimation of muscle parameters, especially optimal fiber length and tendon slack length. In this study, the method for scaling musculotendon parameters published by Winby et al. (2008), J. Biomech. 41, 1682-1688, has been reformulated, generalized and applied to two cases of practical interest: 1) the adjustment of muscle parameters in the entire lower limb following linear scaling of a generic model and 2) their estimation "from scratch" in a subject specific model of the hip joint created from medical images. In the first case, the procedure maintained the muscles' operating range between models with mean errors below 2.3% of the reference model normalized fiber length value. In the second case, a subject specific model of the hip joint was created using segmented bone geometries and muscle volumes publicly available for a cadaveric specimen from the Living Human Digital Library (LHDL). Estimated optimal fiber lengths were found to be consistent with those of a previously published dataset for all 27 considered muscle bundles except gracilis. However, computed tendon slack lengths differed from tendon lengths measured in the LHDL cadaver, suggesting that tendon slack length should be determined via optimization in subject-specific applications. Overall, the presented methodology could adjust the parameters of a scaled model and enabled the estimation of muscle parameters in newly created subject specific models. All data used in the analyses are of public domain and a tool implementing the algorithm is available at https://simtk.org/home/opt_muscle_par.

  8. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with an a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, pose the question to which degree the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
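    For reference, the CC complex resistivity from which such synthetic spectra are sampled is rho(omega) = rho0 * [1 - m * (1 - 1/(1 + (i*omega*tau)^c))], with chargeability m, time constant tau, and frequency dispersion c. A sketch with illustrative parameter values:

```python
import cmath
import math

def cole_cole(freq_hz, rho0, m, tau, c):
    """Cole-Cole complex resistivity at a given frequency."""
    iwt = 1j * 2 * math.pi * freq_hz * tau
    return rho0 * (1 - m * (1 - 1 / (1 + iwt ** c)))

rho0, m, tau, c = 100.0, 0.1, 0.01, 0.5           # illustrative values
freqs = [10 ** (k / 4) for k in range(-8, 17)]    # ~10 mHz .. 10 kHz
spectrum = [cole_cole(f, rho0, m, tau, c) for f in freqs]
phases = [-cmath.phase(z) for z in spectrum]      # negative phase shift (rad)
peak_freq = freqs[max(range(len(freqs)), key=lambda i: phases[i])]
```

The phase peak sits near f = 1/(2*pi*tau); when that peak approaches the edge of the measured frequency band, the decomposition biases described in the abstract become most severe.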

  9. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
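    The class-wise quartile adjustment can be sketched as follows: derive a per-class vertical offset from the lidar-minus-survey errors and subtract it from the lidar elevations. Which quartile the study used is not stated here, so the choice of Q1 and all numbers below are illustrative:

```python
import statistics

def quartile_offsets(errors_by_class):
    """Per-class offset = first quartile of the lidar DEM elevation error
    (lidar elevation minus surveyed bare-earth elevation) for that class."""
    return {cls: statistics.quantiles(errs, n=4)[0]   # Q1 of the error
            for cls, errs in errors_by_class.items()}

# illustrative high/low biomass-class errors in metres
errors = {"high": [0.9, 1.0, 1.1, 0.8, 1.2, 0.95],
          "low":  [0.2, 0.3, 0.25, 0.35, 0.15, 0.28]}
offsets = quartile_offsets(errors)

def adjust(elevation, biomass_class):
    """Pull a lidar elevation toward bare earth for its biomass class."""
    return elevation - offsets[biomass_class]
```

Denser vegetation blocks the laser higher above the soil, so the high-biomass class gets the larger downward correction, consistent with the ~49% bias reduction reported above.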

  11. Testing parameters in structural equation modeling: every "one" matters.

    PubMed

    Gonzalez, R; Griffin, D

    2001-09-01

    A problem with standard errors estimated by many structural equation modeling programs is described. In such programs, a parameter's standard error is sensitive to how the model is identified (i.e., how scale is set). Alternative but equivalent ways to identify a model may yield different standard errors, and hence different Z tests for a parameter, even though the identifications produce the same overall model fit. This lack of invariance due to model identification creates the possibility that different analysts may reach different conclusions about a parameter's significance level even though they test equivalent models on the same data. The authors suggest that parameters be tested for statistical significance through the likelihood ratio test, which is invariant to the identification choice. PMID:11570231
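    The recommended likelihood ratio test is straightforward to compute once the log-likelihoods of the full and restricted models are available. A minimal sketch (the log-likelihood values below are invented for illustration):

    ```python
    from scipy.stats import chi2

    # Likelihood ratio test for nested models: 2 * (llf_full - llf_restricted)
    # is asymptotically chi-square with df = number of restrictions.
    def likelihood_ratio_test(llf_full, llf_restricted, df):
        stat = 2.0 * (llf_full - llf_restricted)
        return stat, chi2.sf(stat, df)

    # Hypothetical fits of a model with and without one path fixed to zero.
    stat, p = likelihood_ratio_test(llf_full=-120.3, llf_restricted=-123.8, df=1)
    print(f"LR = {stat:.2f}, p = {p:.4f}")
    ```

    Unlike a Wald Z test, this statistic depends only on the two model fits, so it is unaffected by how the latent scales are identified.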

  12. Extraction of exposure modeling parameters of thick resist

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei

    2004-12-01

    Experimental and theoretical analysis indicates that many nonlinear factors existing in the exposure process of thick resist can remarkably affect the PAC concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and exposure parameters should not be treated as constants, because a certain relationship exists between the parameters and resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and an experimental setup for measuring exposure parameters of thick resist is developed. We measure the intensity transmittance curve of thick resist AZ4562 under different processing conditions, and extract the corresponding exposure parameters based on the experimental results and calculations from the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, simulation of the thick-resist exposure process can be carried out effectively in the future.
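    For reference, the standard constant-parameter Dill model that the paper enhances can be integrated with a simple finite-difference scheme. The parameter values below are illustrative, not the measured AZ4562 values, and the thickness dependence introduced by the enhanced model is not reproduced.

    ```python
    import numpy as np

    # Constant-parameter Dill exposure model (illustrative A, B, C values):
    #   dI/dz = -I * (A*M + B)   intensity attenuation through the resist
    #   dM/dt = -C * I * M       destruction of the photoactive compound (PAC)
    def dill_exposure(A, B, C, I0, thickness, t_end, nz=200, nt=400):
        dz, dt = thickness / nz, t_end / nt
        M = np.ones(nz)                          # normalized PAC concentration
        for _ in range(nt):
            I = np.empty(nz)
            I[0] = I0
            for k in range(1, nz):               # march intensity downward
                I[k] = I[k - 1] * (1.0 - dz * (A * M[k - 1] + B))
            M *= np.exp(-C * I * dt)             # bleach PAC this time step
        return M, I

    M, I = dill_exposure(A=0.8, B=0.05, C=0.02, I0=100.0, thickness=10.0, t_end=1.0)
    print(M[0], M[-1])   # PAC is consumed much faster near the illuminated surface
    ```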

  13. Executive function and psychosocial adjustment in healthy children and adolescents: A latent variable modelling investigation.

    PubMed

    Cassidy, Adam R

    2016-01-01

    The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential development differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ(2)(66) = 114.48, p < .001; RMSEA = .046; NNFI = .973; CFI = .980. Significant negative associations were demonstrated between Externalizing and both Working Memory and Verbal Fluency (p < .01) factors. A series of increasingly restrictive tests led to the rejection of the hypothesis of invariance, thereby precluding formal statistical examination of age-related differences in latent EF-psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors. PMID:25569593

  14. Identification of parameters of discrete-continuous models

    SciTech Connect

    Cekus, Dawid; Warys, Pawel

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on the genetic algorithm. The presented procedure makes it possible to identify any parameters of discrete-continuous systems.

  15. Estimating parameters for generalized mass action models with connectivity information

    PubMed Central

    Ko, Chih-Lung; Voit, Eberhard O; Wang, Feng-Sheng

    2009-01-01

    Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out on the constrained
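    The general pattern — least-squares fitting of dynamic data subject to steady-state equality constraints — can be sketched with an off-the-shelf constrained optimizer. The toy model and the constraint below are hypothetical stand-ins, not the paper's pathway systems or its flux connectivity relationships.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical constrained estimation: fit y = a * exp(-b*t) to data while
    # enforcing a made-up steady-state "flux" constraint a*b = 1.
    t = np.linspace(0.0, 5.0, 50)
    data = 2.0 * np.exp(-0.5 * t)                  # noise-free synthetic data

    def sse(p):
        a, b = p
        return np.sum((a * np.exp(-b * t) - data) ** 2)

    cons = {"type": "eq", "fun": lambda p: p[0] * p[1] - 1.0}
    fit = minimize(sse, x0=[1.0, 1.0], constraints=[cons], method="SLSQP")
    print(fit.x)                                   # recovers a = 2, b = 0.5
    ```

    Because the synthetic truth satisfies the constraint, the constrained fit recovers it exactly; with noisy data the constraint instead trades a slightly worse fit for estimates that preserve the imposed relationship.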

  16. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, convergence may slow; the two major factors that degrade the speed of convergence are cross effects among parameters and the multiple scales involved in the parameter system.

  17. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well using several modeling situations. Test situations differ in employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly smaller discretization capable of analyzing ASR recovery efficiency. Some addressed situations employ parameters interpolated from the coarse valley model. Other situations employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, assuming homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  18. The HHS-HCC risk adjustment model for individual and small group markets under the Affordable Care Act.

    PubMed

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387

  20. Parameter identifiability of power-law biochemical system models.

    PubMed

    Srinath, Sridharan; Gunawan, Rudiyanto

    2010-09-01

    Mathematical modeling has become an integral component in biotechnology, in which these models are frequently used to design and optimize bioprocesses. Canonical models, like power-laws within the Biochemical Systems Theory, offer numerous mathematical and numerical advantages, including built-in flexibility to simulate general nonlinear behavior. The construction of such models relies on the estimation of unknown case-specific model parameters by way of experimental data fitting, also known as inverse modeling. Despite the large number of publications on this topic, this task remains the bottleneck in canonical modeling of biochemical systems. This paper focuses on the question of identifiability of power-law models from dynamic data, that is, whether the parameter values can be uniquely and accurately identified from time-series data. Existing and newly developed parameter identifiability methods were applied to two power-law models of biochemical systems, and the results pointed to the lack of parametric identifiability as the root cause of the difficulty faced in the inverse modeling. Despite the focus on power-law models, the analyses and conclusions are extendable to other canonical models, and the issue of parameter identifiability is expected to be a common problem in biochemical system modeling. PMID:20197073
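    A common way to probe local identifiability, sketched below on a toy power-law model (not one of the paper's case studies), is to build the Fisher information matrix from output sensitivities; near-singularity flags parameter combinations that cannot be separated by the data.

    ```python
    import numpy as np

    # Local identifiability check via the Fisher information matrix (FIM):
    # S[i, j] = d y(t_i) / d p_j by finite differences, then inspect the
    # conditioning of S^T S. The toy model below is hypothetical.
    def model(p, t):
        return p[0] * t ** p[1]          # power-law-style response y = p0 * t^p1

    def sensitivity(p, t, h=1e-6):
        S = np.empty((t.size, len(p)))
        base = model(np.asarray(p, float), t)
        for j in range(len(p)):
            dp = np.asarray(p, float)
            dp[j] += h
            S[:, j] = (model(dp, t) - base) / h
        return S

    t = np.linspace(1.0, 2.0, 20)
    S = sensitivity([1.5, 0.8], t)
    fim = S.T @ S                        # FIM up to a noise-variance factor
    cond = np.linalg.cond(fim)
    print(cond)   # a huge condition number flags practical non-identifiability
    ```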

  1. Estimating winter wheat phenological parameters: Implications for crop modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  2. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged between 6 and 16 diagnosed with PDD answered several questionnaires…

  3. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  4. Fundamentals, accuracy and input parameters of frost heave prediction models

    NASA Astrophysics Data System (ADS)

    Schellekens, Fons Jozef

    In this thesis, the frost heave knowledge of physical geographers and soil physicists, a detailed description of the frost heave process, methods to determine soil parameters, and analysis of the spatial variability of these soil parameters are connected to the expertise of civil engineers and mathematicians in the (computer) modelling of the process. A description is given of observations of frost heave in laboratory experiments and in the field. Frost heave modelling is made accessible by a detailed description of the main principles of frost heave modelling in a language which can be understood by persons who do not have a thorough mathematical background. Two examples of practical one-dimensional frost heave prediction models are described: a model developed by Wang (1994) and a model developed by Nixon (1991). Advantages, limitations and some improvements of these models are described. It is suggested that conventional frost heave prediction using estimated extreme input parameters may be improved by using locally measured input parameters. The importance of accurate input parameters in frost heave prediction models is demonstrated in a case study using the frost heave models developed by Wang and Nixon. Methods to determine the input parameters are discussed, concluding with a suite of methods, some of which are new, to determine the input parameters of frost heave prediction models from very basic grain size parameters. The spatial variability of the required input parameters is analysed using data obtained along the Norman Wells-Zama oil pipeline at Norman Wells, NWT, located in the transition between discontinuous and continuous permafrost regions at the northern end of Canada's northernmost oil pipeline. A method based on spatial variability analysis of the input parameters in frost heave models is suggested to optimize the improvement that arises from adequate sampling, while minimizing the costs of obtaining field data. A series of frost heave

  5. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic-Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to model parameters kept fixed during the test. Moreover, the absence in the learning catalog of an event similar in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable to describe the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model with parameters held fixed during the test period.
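    For context, the temporal ETAS conditional intensity that such parameter updates feed into has a standard textbook form; the parameter values in this sketch are illustrative, not the daily estimates discussed above.

    ```python
    import numpy as np

    # Temporal ETAS conditional intensity (illustrative parameter values):
    #   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) * (t - t_i + c)^-p
    def etas_intensity(t, times, mags, mu=0.1, K=0.05,
                       alpha=1.5, c=0.01, p=1.1, m0=3.0):
        past = times < t
        trig = (K * np.exp(alpha * (mags[past] - m0))
                * (t - times[past] + c) ** (-p))
        return mu + trig.sum()

    # Two past events: an old M4.0 and a recent M6.0 that dominates the rate.
    times, mags = np.array([0.0, 4.0]), np.array([4.0, 6.0])
    val = etas_intensity(5.0, times, mags)
    print(val)   # expected rate (events per unit time) at t = 5
    ```

    Daily updating, as in the abstract, amounts to re-estimating (mu, K, alpha, c, p) each day from the growing catalog before evaluating this intensity over the forecast window.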

  6. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
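    A Python counterpart to the article's Mathematica approach, sketched on synthetic data: estimate the logistic growth parameters by nonlinear least squares, which carries out the gradient-type search internally.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Logistic growth: P(t) = K / (1 + (K/p0 - 1) * exp(-r*t)).
    def logistic(t, K, r, p0):
        return K / (1.0 + (K / p0 - 1.0) * np.exp(-r * t))

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 30)
    obs = logistic(t, K=100.0, r=0.9, p0=5.0) + rng.normal(0.0, 2.0, t.size)

    popt, pcov = curve_fit(logistic, t, obs, p0=[80.0, 0.5, 2.0])
    perr = np.sqrt(np.diag(pcov))        # one-sigma parameter uncertainties
    print(popt, perr)                    # popt should be near (100, 0.9, 5)
    ```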

  7. NEFDS contamination model parameter estimation of powder contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Gibbs, Timothy J.; Messinger, David W.

    2016-05-01

    Hyperspectral signatures of powdered contaminated surfaces are challenging to characterize due to intimate mixing between materials. Most radiometric models have difficulties in recreating these signatures due to non-linear interactions between particles with different physical properties. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model is capable of recreating longwave hyperspectral signatures at any contamination mixture amount, but only for a limited selection of materials currently in the database. A method has been developed to invert the NEFDS model and perform parameter estimation on emissivity measurements from a variety of powdered materials on substrates. This model was chosen for its potential to accurately determine contamination coverage density as a parameter in the inverted model. Emissivity data were measured using a Designs and Prototypes Fourier transform infrared spectrometer model 102 for different levels of contamination. Temperature emissivity separation was performed to convert data from measured radiance to estimated surface emissivity. Emissivity curves were then input into the inverted model and parameters were estimated for each spectral curve. A comparison of measured data with extrapolated model emissivity curves using estimated parameter values assessed performance of the inverted NEFDS contamination model. This paper will present the initial results of the experimental campaign and the estimated surface coverage parameters.

  8. Uncertainty in dual permeability model parameters for structured soils.

    PubMed

    Arora, B; Mohanty, B P; McGuire, J T

    2012-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics implying that macropores are drained first followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that hydraulic properties and behavior of the matrix-macropore interface is not only a function of saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
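    A minimal random-walk Metropolis-Hastings sampler — the conventional MH baseline the study compares against, not its adaptive AMCMC variant — can be sketched on a one-parameter toy problem:

    ```python
    import numpy as np

    # Random-walk Metropolis-Hastings for the mean of Gaussian data with known
    # sigma and a flat prior; illustrative, not the study's soil-column model.
    def log_post(theta, data, sigma=1.0):
        return -0.5 * np.sum((data - theta) ** 2) / sigma ** 2

    rng = np.random.default_rng(2)
    data = rng.normal(3.0, 1.0, 50)

    theta, lp, chain = 0.0, log_post(0.0, data), []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.5)          # symmetric proposal
        lp_prop = log_post(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)

    post = np.array(chain[1000:])                    # discard burn-in
    print(post.mean(), post.std())   # near data.mean() and sigma/sqrt(n) = 0.14
    ```

    Adaptive variants such as AMCMC tune the proposal covariance from the chain history, which is what resolves the parameter correlations the abstract describes.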

  9. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually manually adjusted to reduce the difference of the model mean state to the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms transform to a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  10. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  11. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  12. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
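    The conventional accuracy measure that the paper argues is optimistic under colored residuals has a simple closed form, Cov(p) ≈ σ²(JᵀJ)⁻¹ with J the output Jacobian. A sketch on a linear toy problem (the paper's colored-residual correction is not reproduced here):

    ```python
    import numpy as np

    # White-residual parameter covariance for least squares / ML on a linear
    # model y = a + b*t; illustrative data, not flight test measurements.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 100)
    J = np.column_stack([np.ones_like(t), t])    # Jacobian of y w.r.t. (a, b)
    y = 1.0 + 2.0 * t + rng.normal(0.0, 0.1, t.size)

    p, res, *_ = np.linalg.lstsq(J, y, rcond=None)
    sigma2 = res[0] / (t.size - 2)               # residual variance estimate
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(J.T @ J)))
    print(p, se)                                 # estimates and standard errors
    ```

    When the residuals are autocorrelated, these standard errors understate the true scatter, which is exactly the situation the paper's corrected expression addresses.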

  13. Numerical modeling of piezoelectric transducers using physical parameters.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2012-05-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer was obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.

  14. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze, and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from workstations to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with only a minimal amount of code while retaining the full power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783
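As a self-contained illustration of the simplest algorithm family such a toolbox provides (plain Monte Carlo sampling within parameter bounds), applied to the Rosenbrock benchmark from the case studies. The bounds and sample count are assumptions, and this is not SPOTPY's actual interface, which is documented with the package:

```python
import numpy as np

def rosenbrock(x):
    """Benchmark objective from the case studies (global minimum 0 at x = 1)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

# Plain Monte Carlo sampling within assumed parameter bounds.
rng = np.random.default_rng(42)
low, high, n_samples = -2.0, 2.0, 20000
samples = rng.uniform(low, high, size=(n_samples, 2))
objectives = np.array([rosenbrock(s) for s in samples])

best = samples[objectives.argmin()]
print(best, objectives.min())
```

Even this crude sampler locates the Rosenbrock valley; the point of a toolbox like SPOTPY is that the same model wrapper can be handed to far more efficient samplers (e.g., SCE-UA, DREAM) without rewriting any model code.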

  15. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve simulation and prediction of catchment hydrological processes. Early on, PBDHMs were assumed to derive their model parameters directly from terrain properties, with no need for parameter calibration. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for PBDHMs in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, and to test and improve its performance; the second is to explore the possibility of improving PBDHM capability in catchment flood forecasting through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show
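The linearly decreasing inertia-weight strategy mentioned in the abstract can be sketched in a generic PSO loop. This is illustrative only: the arccosine acceleration-coefficient strategy is omitted, and the objective is a simple stand-in, not the Liuxihe model:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, iters=200, seed=0,
                 w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
    """PSO with a linearly decreasing inertia weight (generic sketch)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for it in range(iters):
        # Inertia weight decreases linearly from w_start to w_end.
        w = w_start - (w_start - w_end) * it / (iters - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Sanity check on a simple objective (hypothetical stand-in for a model run).
sphere = lambda p: float(np.sum(p**2))
best, best_f = pso_minimize(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best, best_f)
```

A high early inertia weight favors global exploration; the low final weight favors local refinement around the swarm's best position.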

  16. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    NASA Astrophysics Data System (ADS)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
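The method of elementary effects underlying these sampling strategies can be sketched with simple one-at-a-time perturbations (not the OT, MOT, or SU designs the study compares). The test function below is hypothetical:

```python
import numpy as np

def elementary_effects(f, k, r=50, delta=0.5, seed=1):
    """Morris (1991) screening sketch: r one-at-a-time elementary effects per
    parameter on the unit hypercube."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # room for the +delta step
        fx = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (f(xp) - fx) / delta
    # mu* (mean absolute effect) ranks importance; sigma flags interactions.
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Hypothetical test function: x0 strong, x1 weak, x2 inactive, x0-x1 interaction.
g = lambda x: 10.0 * x[0] + 0.5 * x[1] + 4.0 * x[0] * x[1]
mu_star, sigma = elementary_effects(g, k=3)
print(mu_star.round(2), sigma.round(2))
```

The screening outcome is the qualitative ranking mu*(x0) > mu*(x1) > mu*(x2) = 0, which is exactly the kind of result used to drop unimportant parameters before an expensive FAST or Sobol' analysis.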

  17. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    PubMed

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much smaller than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the viewpoint of simulation, we proceeded with an in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to those from colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time and that the nucleation rate remains constant after a period of relaxation, both consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady-state nucleation theory.

  18. Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models.

    PubMed

    Krivitsky, Pavel N; Handcock, Mark S; Morris, Martina

    2011-07-01

    Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424
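The effect of the proposed offset can be seen in the simplest special case, a Bernoulli (edge-independent) ERGM: adding -log(n) to the edge coefficient keeps the expected mean degree roughly constant as the network grows, whereas the unadjusted model keeps density constant so mean degree grows with n. The coefficient value below is hypothetical:

```python
import math

def edge_prob(theta, n, offset=True):
    """Bernoulli ERGM edge probability: logit(p) = theta - log(n) with the
    proposed offset, or logit(p) = theta without it (illustrative case)."""
    eta = theta - math.log(n) if offset else theta
    return 1.0 / (1.0 + math.exp(-eta))

theta = 1.0
for n in (100, 1000, 10000):
    mean_deg_offset = (n - 1) * edge_prob(theta, n)
    mean_deg_raw = (n - 1) * edge_prob(theta, n, offset=False)
    # With the offset the expected mean degree stays near exp(theta);
    # without it, density is constant and mean degree grows with n.
    print(n, round(mean_deg_offset, 3), round(mean_deg_raw, 1))
```

Mean-degree invariance, not density invariance, is what makes parameters estimated from an egocentric sample transferable to networks of a different size.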

  20. Application of physical parameter identification to finite element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1986-01-01

    A time domain technique for matching response predictions of a structural dynamic model to test measurements is developed. Significance is attached to prior estimates of physical model parameters and to experimental data. The Bayesian estimation procedure allows confidence levels in predicted physical and modal parameters to be obtained. Structural optimization procedures are employed to minimize an error functional, with physical model parameters describing the finite element model as design variables. The number of complete FEM analyses is reduced using approximation concepts, including the recently developed convoluted Taylor series approach. The error function is represented in closed form by converting free-decay test data to a time series model using Prony's method. The technique is demonstrated on the simulated response of a simple truss structure.
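Converting free-decay data to a time-series model with Prony's method can be sketched as follows. The two-mode synthetic signal is illustrative, not the paper's truss example:

```python
import numpy as np

def prony(y, p):
    """Prony's method sketch: fit y[k] ~ sum_i A_i * z_i**k with p modes."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Step 1: linear prediction, y[k] = -(a1*y[k-1] + ... + ap*y[k-p]).
    rows = np.array([y[k - 1::-1][:p] for k in range(p, n)])
    a, *_ = np.linalg.lstsq(rows, -y[p:], rcond=None)
    # Step 2: the modal poles z_i are roots of the characteristic polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: amplitudes A_i by linear least squares on the Vandermonde system.
    V = z[None, :] ** np.arange(n)[:, None]
    A, *_ = np.linalg.lstsq(V, y.astype(V.dtype), rcond=None)
    return z, A

# Synthetic two-mode free decay (hypothetical data).
k = np.arange(30)
y = 2.0 * 0.9**k + 1.0 * 0.5**k
z, A = prony(y, p=2)
y_hat = ((z[None, :] ** k[:, None]) @ A).real
print(np.sort(z.real), np.max(np.abs(y_hat - y)))
```

Because every step after the root-finding is linear least squares, the fitted time-series model gives the closed-form error function the abstract refers to.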

  1. Parameter Identification in a Tuberculosis Model for Cameroon

    PubMed Central

    Moualeu-Ngangue, Dany Pascal; Röblitz, Susanna; Ehrig, Rainald; Deuflhard, Peter

    2015-01-01

    A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity as well as both frequency- and density-dependent transmissions. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years. PMID:25874885

  2. Regionalization parameters of conceptual rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Osuch, M.

    2003-04-01

    The main goal of this study was to develop techniques for the a priori estimation of hydrological model parameters. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, ranging in size from 1,000 to 100,000 km². The model was calibrated for a number of gauged catchments with different catchment characteristics. The parameters of the model were related to different climatic and physical catchment characteristics (topography, land use, vegetation, and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. The model performance using regional parameters was promising for most of the calibration and validation catchments.
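The regionalization idea, relating a calibrated parameter to a catchment descriptor and transferring the relationship to ungauged catchments, can be sketched with a simple linear regression. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 gauged catchments: one calibrated parameter vs. a catchment descriptor
# (log10 of area in km^2); the linear relation and noise are hypothetical.
log_area = rng.uniform(3.0, 5.0, size=50)                 # 1,000 .. 100,000 km^2
param = 0.8 - 0.1 * log_area + rng.normal(0.0, 0.02, 50)  # calibrated values

# Fit the regional relationship, then transfer it to an ungauged catchment.
slope, intercept = np.polyfit(log_area, param, 1)
predicted = slope * 4.2 + intercept                       # catchment of 10^4.2 km^2
print(round(slope, 3), round(predicted, 3))
```

In practice the regression would use several descriptors (climate, land use, soils), and validation consists of running the model with the regressed parameters on catchments held out of calibration, as described in the abstract.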

  3. Parameter Sensitivity Evaluation of the CLM-Crop model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Zeng, X.; Mametjanov, A.; Anitescu, M.; Norris, B.; Kotamarthi, V. R.

    2011-12-01

    In order to improve carbon cycling within Earth System Models, crop representation for corn, spring wheat, and soybean species has been incorporated into the latest version of the Community Land Model (CLM), the land surface model in the Community Earth System Model. As a means to evaluate and improve the CLM-Crop model, we will determine the sensitivity of various crop parameters on carbon fluxes (such as GPP and NEE), yields, and soil organic matter. The sensitivity analysis will perform small perturbations over a range of values for each parameter on individual grid sites, for comparison with AmeriFlux data, as well as globally so crop model parameters can be improved. Over 20 parameters have been identified for evaluation in this study including carbon-nitrogen ratios for leaves, stems, roots, and organs; fertilizer applications; growing degree days for each growth stage; and more. Results from this study will be presented to give a better understanding of the sensitivity of the various parameters used to represent crops, which will help improve the overall model performance and aid with determining future influences climate change will have on cropland ecosystems.

  4. Evaluating Alternative Risk Adjusters for Medicare.

    PubMed

    Pope, Gregory C; Adamache, Killard W; Walsh, Edith G; Khandker, Rezaul K

    1998-01-01

    In this study the authors use 3 years of the Medicare Current Beneficiary Survey (MCBS) to evaluate alternative demographic, survey, and claims-based risk adjusters for Medicare capitation payment. The survey health-status models have three to four times the predictive power of the demographic models. The risk-adjustment model derived from claims diagnoses has 75-percent greater predictive power than a comprehensive survey model. No single model predicts average expenditures well for all beneficiary subgroups of interest, suggesting a combined model may be appropriate. More data are needed to obtain stable estimates of model parameters. Advantages and disadvantages of alternative risk adjusters are discussed.

  5. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K1 correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations are modeled via the model of Treuhaft and Lanyi, improving treatment of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  6. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. 
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  7. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
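The second step, adjusting predicted abundance for imperfect detection and the effectively sampled area, reduces to a simple correction. The counts, detection probabilities, and areas below are hypothetical:

```python
import numpy as np

# Adjust model-predicted counts for imperfect detection and effective
# sampled area (all numbers hypothetical).
predicted_counts = np.array([12.0, 30.0, 7.0])   # birds detected per station
detection_prob = np.array([0.6, 0.75, 0.5])      # station-specific p < 1
area_sampled_ha = np.array([3.1, 3.1, 3.1])      # effective area per station

# Adjusted density: counts / (p * area) -> birds per hectare.
density = predicted_counts / (detection_prob * area_sampled_ha)
print(density.round(2))
```

Dividing by the detection probability inflates raw counts to account for birds present but not detected; dividing by effective area converts the result to a density that can be mapped spatially.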

  8. Parameters of cosmological models and recent astronomical observations

    SciTech Connect

    Sharov, G.S.; Vorontsova, E.G. E-mail: elenavor@inbox.ru

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H_0 = 70.262 ± 0.319 km s^-1 Mpc^-1, Ω_m = 0.276^{+0.009}_{-0.008}, Ω_Λ = 0.769 ± 0.029, Ω_k = -0.045 ± 0.032. The GCG model under the restriction α ≥ 0 is reduced to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z ≥ 2.3.
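Parameter estimation of this kind can be sketched as a chi-square fit of the flat ΛCDM expansion rate to H(z) points. The data here are synthetic, generated from assumed "true" values, not the 34 observational points used in the paper:

```python
import numpy as np

def hubble(z, H0, Om):
    """Flat LambdaCDM expansion rate: H(z) = H0*sqrt(Om*(1+z)^3 + (1-Om))."""
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

# Synthetic H(z) points (hypothetical), generated from H0=70, Om=0.28.
z_obs = np.linspace(0.1, 2.3, 12)
H_obs = hubble(z_obs, 70.0, 0.28)
sigma = np.full_like(z_obs, 3.0)   # assumed measurement uncertainties

# Brute-force chi-square minimization over a parameter grid.
H0_grid = np.linspace(60.0, 80.0, 201)
Om_grid = np.linspace(0.1, 0.5, 201)
chi2 = np.array([[np.sum(((hubble(z_obs, h, om) - H_obs) / sigma)**2)
                  for om in Om_grid] for h in H0_grid])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(H0_grid[i], Om_grid[j], chi2.min())
```

The 1σ confidence intervals quoted in such papers correspond to the region of the grid where chi-square exceeds its minimum by less than the appropriate threshold (e.g., Δχ² = 1 for one parameter at a time).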

  9. Material parameter computation for multi-layered vocal fold models.

    PubMed

    Schmidt, Bastian; Stingl, Michael; Leugering, Günter; Berry, David A; Döllinger, Michael

    2011-04-01

    Today, the prevention and treatment of voice disorders is an ever-increasing health concern. Since many occupations rely on verbal communication, vocal health is necessary just to maintain one's livelihood. Commonly applied models to study vocal fold vibrations and air flow distributions are self-sustained physical models of the larynx composed of artificial silicone vocal folds. Choosing appropriate mechanical parameters for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In the present work, a combination of experimental and numerical approaches to compute material parameters for synthetic vocal fold models is presented. The material parameters are derived from deformation behaviors of excised human larynges. The resulting deformations are used as reference displacements for a tracking functional to be optimized. Material optimization was applied to three-dimensional vocal fold models based on isotropic and transverse-isotropic material laws, considering both a layered model with homogeneous material properties on each layer and an inhomogeneous model. The best results were obtained with a transverse-isotropic, inhomogeneous (i.e., not manufacturable) model. For the homogeneous model (three layers), the transverse-isotropic material parameters were also computed for each layer, yielding deformations similar to the measured human vocal fold deformations.

  10. System for Predicting Pitzer Ion-Interaction Model Parameters

    NASA Astrophysics Data System (ADS)

    Schreiber, D. R.; Obias, T.

    2002-12-01

    Pitzer's Ion-Interaction Model has been widely utilized for the prediction of non-ideal solution behavior. The Pitzer model does an excellent job of predicting the solubility of minerals over a wide range of conditions for natural water systems. While Pitzer's equations have been successful in modeling systems for which parameters are available, there are still some systems that cannot be modeled because parameters are not available for all of the salts of interest. For example, there is little to no data for aluminum salts, yet in acidified natural waters aluminum may be present at significant concentrations. In addition, aluminum chemistry will also be important in the remediation of acidified High-level waste. Given the quantity of work involved in generating the needed parameters, it would be advantageous to be able to predict Pitzer parameters for salt systems when no data are available. Recently we began work on modeling High-level waste systems where Pitzer parameters are not available for some of the constituents of interest. We will discuss a set of relations we have developed for the prediction of Pitzer's binary ion-interaction parameters. In the binary parameter case, we reformulated Pitzer's equations by replacing the parameters β(0), β(1), β(2), and C with expressions in ionic radii. Equations have been developed for salts of a particular anion with cations of similar charge. For example, there is a single equation for the 1:1 chloride salts. Relations for acids were developed separately. We have also developed a separate set of equations for all salts of a particular charge type independent of the anion. While the latter set of equations is of lesser predictive value, it can be used in cases where we do not have a relation for a particular anion. Since any system used to predict parameters would result in some loss of accuracy, experimentally determined parameters should be used when available. The ability of parameters derived from our model

  11. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  12. Spatial Variability and Interpolation of Stochastic Weather Simulation Model Parameters.

    NASA Astrophysics Data System (ADS)

    Johnson, Gregory L.; Daly, Christopher; Taylor, George H.; Hanson, Clayton L.

    2000-06-01

    The spatial variability of 58 precipitation and temperature parameters from the 'generation of weather elements for multiple applications' (GEM) weather generator has been investigated over a region of significant complexity in topography and climate. GEM parameters were derived for 80 climate stations in southern Idaho and southeastern Oregon. A technique was developed and used to determine the GEM parameters from high-elevation snowpack telemetry stations that report precipitation in nonstandard 2.5-mm (versus 0.25 mm) increments. Important dependencies were noted between most of these parameters and elevation (both domainwide and local), location, and other factors. The 'parameter-elevation regressions on independent slopes model' (PRISM) spatial modeling system was used to develop approximate 4-km gridded data fields of each of these parameters. A feature was developed in PRISM that models temperatures above and below mean inversions differently. Examples of the spatial fields derived from this study and a discussion of the applications of these spatial parameter fields are included.

  13. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  14. Parameter uncertainty analysis of a biokinetic model of caesium.

    PubMed

    Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Parameter uncertainty analysis was used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs, defined as the square root of the ratio of the 97.5th to the 2.5th percentile), of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. At late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential parameters for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty (a factor of 43 in whole-body retention at late times, after Day 500) on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS.
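The uncertainty factor used above (the square root of the ratio of the 97.5th to the 2.5th percentile) is straightforward to compute from a sample of model predictions. A minimal sketch in Python, with an illustrative lognormal spread standing in for the real biokinetic output:

```python
import numpy as np

def uncertainty_factor(samples):
    """UF as defined in the abstract: sqrt of the ratio of the
    97.5th to the 2.5th percentile of the sampled predictions."""
    p975, p025 = np.percentile(samples, [97.5, 2.5])
    return np.sqrt(p975 / p025)

# Illustrative stand-in for sampled whole-body retention predictions
rng = np.random.default_rng(0)
retention = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(round(uncertainty_factor(retention), 2))
```

For a lognormal spread with log-standard-deviation sigma, this UF equals exp(1.96 sigma), about 7.1 for sigma = 1.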

  15. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  16. Optimization of parameters for maximization of plateletpheresis and lymphocytapheresis yields on the Haemonetics Model V50.

    PubMed

    AuBuchon, J P; Carter, C S; Adde, M A; Meyer, D R; Klein, H G

    1986-01-01

    Automated apheresis techniques afford the opportunity of tailoring collection parameters for each donor's hematologic profile. This study investigated the effect of various settings of the volume offset parameter as utilized in the Haemonetics Model V50 instrumentation during platelet- and lymphocytapheresis to optimize product yield, purity, and collection efficiency. In both types of procedures, increased product yield could be obtained by using an increased volume offset for donors having lower hematocrits. This improvement was related to an increase in collection efficiency. Platelet products also contained fewer contaminating lymphocytes with this approach. Adjustment of the volume offset parameter can be utilized to make the most efficient use of donors and provide higher-quality products.

  17. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title, dated August 1, 1986, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  18. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    USGS Publications Warehouse

    Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging tasks in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS) and C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and soils were used to calibrate the model. To investigate how well the available observations could constrain the adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of the observations, the resolution of the model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation process. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed relative to the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable
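The basic inversion step described here (adjust a small number of parameters so modeled stocks fit observed stocks) can be illustrated with a toy surrogate. CENTURY itself is far more complex; the two "parameters" and the stock equation below are made up for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for calibrating adjustable parameters against C stock
# observations: a one-pool model, C(t) = (npp / k) * (1 - exp(-k t)).
def model(theta, t):
    k_turnover, npp = theta  # hypothetical adjustable parameters
    return npp / k_turnover * (1.0 - np.exp(-k_turnover * t))

t_obs = np.array([5.0, 20.0, 50.0, 100.0, 200.0])   # site ages, illustrative
theta_true = np.array([0.02, 4.0])
rng = np.random.default_rng(1)
y_obs = model(theta_true, t_obs) * (1 + 0.05 * rng.standard_normal(t_obs.size))

# Nonlinear least-squares inversion from a deliberately poor first guess
fit = least_squares(lambda th: model(th, t_obs) - y_obs, x0=[0.05, 1.0],
                    bounds=([1e-4, 0.1], [1.0, 50.0]))
print(fit.x)  # recovered (k_turnover, npp)
```

With only a handful of noisy observations, adding more adjustable parameters than the data can resolve would over-fit, which is exactly the behavior the paper diagnoses with sensitivity and confidence-limit statistics.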

  19. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
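The transfer step of the scheme can be sketched as follows; the similarity metric and all names here are simplified assumptions, not the paper's exact procedure:

```python
import numpy as np

# For one ungauged grid cell: rank donor catchments by similarity in
# climatic/physiographic attribute space, run the model with each of the
# 10 most similar donors' calibrated parameters, and average simulated Q.
def regionalized_q(cell_attrs, donor_attrs, donor_params, run_model, n_donors=10):
    dist = np.linalg.norm(donor_attrs - cell_attrs, axis=1)  # assumed metric
    nearest = np.argsort(dist)[:n_donors]
    sims = np.stack([run_model(donor_params[i]) for i in nearest])
    return sims.mean(axis=0)  # averaging over donors enhanced performance

# Tiny usage example with a dummy "model" that scales a flat hydrograph
rng = np.random.default_rng(2)
donor_attrs = rng.random((30, 3))    # 30 donors, 3 normalized attributes
donor_params = rng.random(30)        # one calibrated parameter per donor
cell = np.array([0.5, 0.5, 0.5])
q = regionalized_q(cell, donor_attrs, donor_params,
                   run_model=lambda p: p * np.ones(365))
print(q.shape)
```

In the paper the averaging is done over the simulated streamflow of the ten donors, not over the parameter values themselves, which is what the sketch mirrors.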

  20. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; van Dijk, Albert; de Roo, Ad; Miralles, Diego; Schellekens, Jaap; McVicar, Tim; Bruijnzeel, Sampurno

    2016-04-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macro-scale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the ten most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments >5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV using regionalized parameters outperformed nine state-of-the-art macro-scale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via http://water.jrc.ec.europa.eu/HBV/.

  1. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application. PMID:19336320
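As background for the GTLS idea, here is a sketch of plain total least squares (the simplest, unweighted member of the family) via the classic SVD construction. This is illustrative only; it is not the paper's weighted GTLS estimator or its expectation-maximization partitioning:

```python
import numpy as np

# Ordinary least squares is biased when the regressors themselves are noisy;
# total least squares accounts for noise in all channels. SVD solution for
# the overdetermined system A x ~ b: take the right singular vector of the
# augmented matrix [A | b] for the smallest singular value.
def tls(A, b):
    n = A.shape[1]
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                     # smallest-singular-value direction
    return -v[:n] / v[n]

# Noisy-input example: both A and b are perturbed
rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((200, 2))
b = A @ x_true
A_noisy = A + 0.05 * rng.standard_normal(A.shape)
b_noisy = b + 0.05 * rng.standard_normal(b.shape)
print(tls(A_noisy, b_noisy))  # close to x_true despite input noise
```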

  2. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model…

  3. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric and biometric…

  4. Atmospheric turbulence parameters for modeling wind turbine dynamics

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Thresher, R. W.

    1982-01-01

    A model which can be used to predict the response of wind turbines to atmospheric turbulence is given. The model was developed using linearized aerodynamics for a three-bladed rotor and accounts for three turbulent velocity components as well as velocity gradients across the rotor disk. Typical response power spectral densities are shown. The system response depends critically on three wind and turbulence parameters, and models are presented to predict desired response statistics. An equation error method, which can be used to estimate the required parameters from field data, is also presented.

  5. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. Bayesian inversion is a powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (e.g., wave height, flow velocity, mean sea level). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using only prior information for the input data: the variance of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
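The Monte Carlo step described above (sample uncertain inputs from assumed distributions, run the model, summarize the output spread) can be sketched as follows; the toy forward model merely stands in for Delft3D, and the input distributions are assumptions:

```python
import numpy as np

# Trivial stand-in for the wave model: a shoaling-style relation mapping
# offshore wave height and local depth to a nearshore wave height.
def model(offshore_height, depth):
    return offshore_height * (1.0 / depth) ** 0.25

rng = np.random.default_rng(4)
n = 10_000
h_off = rng.normal(1.5, 0.2, n)    # offshore wave height [m], assumed prior
depth = rng.uniform(4.0, 6.0, n)   # bathymetry uncertainty [m], assumed prior
h_near = model(h_off, depth)       # one model run per input sample

print(f"mean={h_near.mean():.2f} m, 95% interval="
      f"({np.percentile(h_near, 2.5):.2f}, {np.percentile(h_near, 97.5):.2f}) m")
```

In the actual study each "run" is a full Delft3D simulation, so the number of samples needed for converged output statistics is the dominant cost.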

  6. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    PubMed

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. PMID:26979885

  7. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    PubMed

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  8. A self-adjusted Monte Carlo simulation as a model for financial markets with central regulation

    NASA Astrophysics Data System (ADS)

    Horváth, Denis; Gmitra, Martin; Kuscsik, Zoltán

    2006-03-01

    Properties of the self-adjusted Monte Carlo algorithm applied to the 2d Ising ferromagnet are studied numerically. An endogenous feedback, expressed in terms of instantaneous running averages, is suggested in order to generate a biased random walk of the temperature that converges to criticality without external tuning. The robustness of the stationary regime with respect to partial accessibility of the information is demonstrated. Several statistical and scaling aspects have been identified which allow us to establish an alternative spin lattice model of the financial market. It turns out that our model, like the model suggested by Bornholdt [Int. J. Mod. Phys. C 12 (2001) 667], may be described by a Lévy-type stationary distribution of feedback variations with unique exponent α ≃ 3.3. However, the differences reflected by the Hurst exponents suggest that the resemblances between the studied models are non-trivial.

  9. Bayesian methods for characterizing unknown parameters of material models

    DOE PAGES

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions, obtained from prior information and measurements of observable quantities of interest that depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). The Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
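A one-parameter Metropolis sampler illustrates the framework: the unknown parameter gets a prior, measurements of a dependent observable update it, and the posterior is sampled. The forward model and all numbers below are hypothetical stand-ins, not the paper's transport equation or weld model:

```python
import numpy as np

# Log-posterior for a single unknown parameter theta. The "observable"
# here is a made-up forward model, observable = theta**2, with Gaussian
# measurement noise and a Gaussian prior on theta.
def log_post(theta, data, prior_mu=1.0, prior_sd=0.5, noise_sd=0.1):
    observable = theta ** 2
    loglik = -0.5 * np.sum((data - observable) ** 2) / noise_sd ** 2
    logprior = -0.5 * (theta - prior_mu) ** 2 / prior_sd ** 2
    return loglik + logprior

rng = np.random.default_rng(6)
data = 1.2 ** 2 + 0.1 * rng.standard_normal(20)  # measurements; true theta = 1.2

# Random-walk Metropolis sampling of the posterior
theta, chain = 1.0, []
for _ in range(20_000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
        theta = prop
    chain.append(theta)

post = np.array(chain[5_000:])  # discard burn-in
print(f"posterior mean={post.mean():.2f}, sd={post.std():.3f}")
```

The posterior concentrates near the value whose observable matches the data, pulled only weakly toward the prior mean once the measurements dominate.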

  10. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  11. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit, and maximum power points of a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of the full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
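For context, the single diode equation is implicit in the current, so even the forward evaluation requires root-finding. A minimal sketch of computing an I-V curve from an assumed five-parameter set (the values below are illustrative, not estimates produced by the report's method):

```python
import numpy as np
from scipy.optimize import brentq

# Single diode equation, solved implicitly for I at each voltage:
#   I = IL - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh
# Parameters (photocurrent IL, saturation current I0, series and shunt
# resistances Rs/Rsh, modified ideality factor nVth) are made-up module values.
def iv_curve(V, IL=8.0, I0=1e-9, Rs=0.3, Rsh=300.0, nVth=1.7):
    def f(I, v):
        return IL - I0 * np.expm1((v + I * Rs) / nVth) - (v + I * Rs) / Rsh - I
    return np.array([brentq(f, -IL, 2 * IL, args=(v,)) for v in V])

V = np.linspace(0.0, 38.0, 50)
I = iv_curve(V)
P = V * I
print(f"Isc ~ {I[0]:.2f} A, Pmax ~ {P.max():.0f} W")
```

Estimating the parameters is the inverse of this: adjust (IL, I0, Rs, Rsh, nVth) until curves like this match the measured ones across irradiance and temperature conditions.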

  12. Modeling smectic layers in confined geometries: order parameter and defects.

    PubMed

    Pevnyi, Mykhailo Y; Selinger, Jonathan V; Sluckin, Timothy J

    2014-09-01

    We identify problems with the standard complex order parameter formalism for smectic-A (SmA) liquid crystals and discuss possible alternative descriptions of smectic order. In particular, we suggest an approach based on the real smectic density variation rather than a complex order parameter. This approach gives reasonable numerical results for the smectic layer configuration and director field in sample geometries and can be used to model smectic liquid crystals under nanoscale confinement for technological applications.

  13. Parameter selection and testing the soil water model SOIL

    NASA Astrophysics Data System (ADS)

    McGechan, M. B.; Graham, R.; Vinten, A. J. A.; Douglas, J. T.; Hooda, P. S.

    1997-08-01

    The soil water and heat simulation model SOIL was tested for its suitability to study the processes of transport of water in soil. Required parameters, particularly soil hydraulic parameters, were determined by field and laboratory tests for some common soil types and for soils subjected to contrasting treatments of long-term grassland and tilled land under cereal crops. Outputs from simulations were shown to be in reasonable agreement with independently measured field drain outflows and soil water content histories.

  14. Simultaneous estimation of parameters in the bivariate Emax model.

    PubMed

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
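A single-response Emax fit illustrates the model class; the paper's bivariate model and system-estimation approach are not attempted here, and all parameter values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard Emax dose-response relation: E = E0 + Emax * dose / (ED50 + dose)
def emax(dose, e0, emax_, ed50):
    return e0 + emax_ * dose / (ed50 + dose)

dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
rng = np.random.default_rng(5)
y = emax(dose, 1.0, 8.0, 30.0) + 0.1 * rng.standard_normal(dose.size)

# Equation-by-equation estimation of one response: nonlinear least squares
popt, _ = curve_fit(emax, dose, y, p0=[0.5, 5.0, 20.0])
print(np.round(popt, 1))  # estimates of (E0, Emax, ED50)
```

The system estimation studied in the paper would instead fit both responses jointly, exploiting the (co)variance structure between the two relations.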

  15. Inelastic properties of magnetorheological composites: II. Model, identification of parameters

    NASA Astrophysics Data System (ADS)

    Kaleta, Jerzy; Lewandowski, Daniel; Zietek, Grazyna

    2007-10-01

    As a result of a two-part research project the inelastic properties of a selected group of magnetorheological composites in cyclic shear conditions have been identified. In the first part the fabrication of the composites, their structure, the control-measurement setup, the test methods and the experimental results were described. In the second part (presented here), the experimental data are used to construct a constitutive model and identify it. A four-parameter model of an elastic/viscoplastic body was adopted for description. The model coefficients were made dependent on magnetic field strength H. The model was analysed and procedures for its identification were designed. Two-phase identification of the model parameters was carried out. The model has been shown to be valid in a frequency range above 5 Hz.

  16. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electromagnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electromotive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  17. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  18. Microscopic calculation of interacting boson model parameters by potential-energy surface mapping

    SciTech Connect

    Bentley, I.; Frauendorf, S.

    2011-06-15

    A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.

  19. Assimilation of surface data in a one-dimensional physical-biogeochemical model of the surface ocean: 2. Adjusting a simple trophic model to chlorophyll, temperature, nitrate, and pCO{sub 2} data

    SciTech Connect

    Prunet, P.; Minster, J.F.; Echevin, V.

    1996-03-01

    This paper builds on previous work which produced a constrained physical-biogeochemical model of the carbon cycle in the surface ocean. Three issues are addressed: (1) the results of chlorophyll assimilation using a simpler trophic model, (2) adjustment of parameters using the simpler model and data other than surface chlorophyll concentrations, and (3) consistency of the main carbon fluxes derived by the simplified model with values from the more complex model. A one-dimensional vertical model coupling the physics of the ocean mixed layer and a description of biogeochemical processes with a simple trophic model was used to address these issues. Chlorophyll concentration, nitrate concentration, and temperature were used to constrain the model. The surface chlorophyll information was shown to be sufficient to constrain primary production within the photic layer. The simultaneous assimilation of chlorophyll, nitrate, and temperature resulted in a significant improvement of the model simulation for the data used. Of the nine biological and physical parameters which resulted in significant variations of the simulated chlorophyll concentration, seven linear combinations of the model parameters were constrained. The model fit was an improvement on independent surface chlorophyll and nitrate data. This work indicates that a relatively simple biological model is sufficient to describe carbon fluxes. Assimilation of satellite or climatological data could be used to adjust the parameters of the model for three-dimensional models. It also suggests that the main carbon fluxes driving the carbon cycle within surface waters could be derived regionally from surface information. 38 refs., 16 figs., 7 tabs.

  20. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is choosing the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesized sounds to be unrealistic. An original approach is proposed that estimates the parameters of a piano model from measurements of the string vibration by minimizing a perceptual criterion. The minimization process used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in the case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process was run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  1. Advanced parameter retrievals for metamaterial slabs using an inhomogeneous model

    NASA Astrophysics Data System (ADS)

    Li Hou, Ling; Chin, Jessie Yao; Yang, Xin Mi; Lin, Xian Qi; Liu, Ruopeng; Xu, Fu Yong; Cui, Tie Jun

    2008-03-01

    The S-parameter retrieval has proved to be an efficient approach to obtain electromagnetic parameters of metamaterials from reflection and transmission coefficients, where a slab of metamaterial with finite thickness is regarded as a homogeneous medium slab with the same thickness [D. R. Smith and S. Schultz, Phys. Rev. B 65, 195104 (2002)]. However, metamaterial structures composed of subwavelength unit cells are different from homogeneous materials, and the conventional retrieval method is, under certain circumstances, not accurate enough. In this paper, we propose an advanced parameter retrieval method for metamaterial slabs using an inhomogeneous model. Due to the coupling effects of unit cells in a metamaterial slab, the roles of edge and inner cells in the slab are different. Hence, the corresponding equivalent medium parameters are different, which results in the inhomogeneous property of the metamaterial slab. We propose the retrievals of medium parameters for edge and inner cells from S parameters by considering two- and three-cell metamaterial slabs, respectively. Then we set up an inhomogeneous three-layer model for arbitrary metamaterial slabs, which is much more accurate than the conventional homogeneous model. Numerical simulations verify the above conclusions.

  2. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for integrated model construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region of the Taihu Lake basin; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations less than 10% during 2005-2010. The research results provide a direct reference for AnnAGNPS parameter selection and calibration. The runoff simulation results for the study area also showed that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
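
    The perturbation method used above can be sketched generically: perturb each parameter by a small relative amount and record the relative change in the model output. The helper below is a minimal sketch; the toy "runoff-like" model and the parameter names `cn` and `k` are illustrative assumptions, not AnnAGNPS parameters.

```python
def perturbation_sensitivity(model, params, delta=0.01):
    """Relative sensitivity index of a model output to each parameter.

    Central difference on a relative perturbation:
    S_i ~ ((Y(p_i*(1+delta)) - Y(p_i*(1-delta))) / Y0) / (2*delta)
    """
    base = model(params)
    sens = {}
    for name, value in params.items():
        up = dict(params, **{name: value * (1 + delta)})
        down = dict(params, **{name: value * (1 - delta)})
        sens[name] = (model(up) - model(down)) / base / (2 * delta)
    return sens

# Hypothetical runoff-like model: output scales with cn^2, only weakly with k,
# so the index for 'cn' is ~2 while 'k' is near zero.
toy = lambda p: p["cn"] ** 2 * (1 + 0.1 * p["k"])
s = perturbation_sensitivity(toy, {"cn": 70.0, "k": 0.3})
```

    Ranking parameters by the magnitude of this index reproduces the kind of ordering reported in the abstract (e.g., CN dominating the runoff response).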

  4. Control of the SCOLE configuration using distributed parameter models

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of the dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria remain to be studied.

  5. Quantifying the parameters of Prusiner's heterodimer model for prion replication

    NASA Astrophysics Data System (ADS)

    Li, Z. R.; Liu, G. R.; Mi, D.

    2005-02-01

    A novel approach for the determination of parameters in prion replication kinetics is developed based on Prusiner's heterodimer model. It is proposed to employ a simple 2D HP lattice model and a two-state transition theory to determine the kinetic parameters that play the key role in the prion replication process. The simulation results reproduce the most important facts observed in prion diseases, including the long incubation time, rapid death following symptom manifestation, the effect of inoculation size, and the different mechanisms of the familial and infectious prion diseases. Extensive simulation with various thermodynamic parameters shows that Prusiner's heterodimer model is applicable and that the putative protein X plays a critical role in prion replication.

  6. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
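
    The iterative nonlinear least-squares fit can be illustrated with a much-reduced version of the problem: a Gauss-Newton loop recovering just two of the six parameters (amplitude and track point) from a smoothed-step leading-edge model. The erf-based waveform and all numbers below are illustrative assumptions, not the actual SEASAT-1 waveform model.

```python
import math

def waveform(t, amp, t0, sigma=0.5):
    """Smoothed-step leading edge: an erf-based stand-in for the mean return."""
    return 0.5 * amp * (1.0 + math.erf((t - t0) / (math.sqrt(2.0) * sigma)))

def gauss_newton_fit(ts, ys, amp, t0, sigma=0.5, iters=30):
    """Fit amplitude and track point by iterative nonlinear least squares."""
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for t, y in zip(ts, ys):
            r = y - waveform(t, amp, t0, sigma)
            # Jacobian of the model with respect to (amp, t0).
            j1 = 0.5 * (1.0 + math.erf((t - t0) / (math.sqrt(2.0) * sigma)))
            j2 = -amp * math.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2)) / (
                math.sqrt(2.0 * math.pi) * sigma)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            b1 += j1 * r; b2 += j2 * r
        det = a11 * a22 - a12 * a12
        amp += (a22 * b1 - a12 * b2) / det   # solve the 2x2 normal equations
        t0 += (a11 * b2 - a12 * b1) / det
    return amp, t0

# Synthetic noise-free "waveform samples" with known amplitude and track point.
ts = [0.1 * i - 2.0 for i in range(61)]
ys = [waveform(t, 2.0, 1.0) for t in ts]
amp_hat, t0_hat = gauss_newton_fit(ts, ys, amp=1.5, t0=0.7)
```

    The full six-parameter fit follows the same pattern with a 6x6 normal-equation system and the complete Brown-type waveform model.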

  7. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew Robert

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize model and parameter uncertainty independently. The method is explained under the assumption that some experimental data are available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. In addition to the nominal approach, an alternative distribution with corrections can be used to expand the scope of the method. This is one of very few methods that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.

  8. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is subject to several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of these tools are closed source, so the choice of a specific parameter estimation method is sometimes driven more by availability than by performance. A toolbox with a large set of methods can support users in deciding on the most suitable method, and further enables them to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analysis (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
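
    Of the samplers listed, Latin Hypercube Sampling is the easiest to sketch from scratch: each dimension is split into n equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension. This is a generic LHS sketch under those assumptions, not SPOT's actual implementation.

```python
import random

def latin_hypercube(n, dims, rng=None):
    """n samples in [0,1)^dims with exactly one sample per stratum per axis."""
    rng = rng or random.Random()
    cols = []
    for _ in range(dims):
        # One uniform draw inside each of the n strata of width 1/n ...
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)  # ... then shuffle strata independently per dimension.
        cols.append(col)
    return list(zip(*cols))  # n points, each a dims-tuple

pts = latin_hypercube(10, 2, random.Random(42))
```

    Scaling each coordinate into a parameter's prior range turns the unit-cube sample into a design for model runs.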

  9. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of recipients without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  10. [Parameter uncertainty analysis for urban rainfall runoff modelling].

    PubMed

    Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei

    2012-07-01

    An urban watershed in Xiamen was selected for parameter uncertainty analysis of urban stormwater runoff modelling, in terms of parameter identification and sensitivity analysis, based on the storm water management model (SWMM) using Monte-Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters, with larger K-S values, in the hydrological and hydraulic module; the rank of K-S values in this module is Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the Coefficient and Exponent parameters of the exponential washoff model and the Max. Buildup parameter of the saturation buildup model in the three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
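
    The RSA procedure pairs Monte-Carlo sampling with a Kolmogorov-Smirnov distance between the parameter distributions of "behavioral" and "non-behavioral" runs: a larger K-S value marks a more identifiable parameter. The two-parameter toy model, parameter names, and behavioral threshold below are illustrative assumptions.

```python
import bisect
import random

def ks_statistic(a, b):
    """Maximum vertical distance between two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def rsa(model, n=2000, threshold=0.3, seed=0):
    """Monte-Carlo RSA: split samples at the threshold, K-S per parameter."""
    rng = random.Random(seed)
    behavioral, rest = [], []
    for _ in range(n):
        p = {"p1": rng.random(), "p2": rng.random()}
        (behavioral if model(p) < threshold else rest).append(p)
    return {k: ks_statistic([s[k] for s in behavioral], [s[k] for s in rest])
            for k in ("p1", "p2")}

# Toy model: the output is driven almost entirely by p1, so its K-S
# statistic is large while p2's stays near zero.
ks = rsa(lambda p: p["p1"] ** 2 + 0.01 * p["p2"])
```

    Ranking parameters by these K-S values yields the kind of ordering reported in the abstract.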

  11. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  12. Multivariate Risk Adjustment of Primary Care Patient Panels in a Public Health Setting: A Comparison of Statistical Models.

    PubMed

    Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani

    2016-01-01

    We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings.
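
    A two-part model separates the zero/non-zero decision from the conditional frequency: expected visits = P(any visit) x E[visits | at least one]. The stratified sketch below (strata labels and visit counts are made up for illustration) uses cell means for both parts; the study itself fits regressions, so read this as the combination step only.

```python
from collections import defaultdict

def fit_two_part(records):
    """records: iterable of (stratum, visit_count).

    Part 1: probability of any visit per stratum.
    Part 2: mean visit count among patients with at least one visit.
    """
    n = defaultdict(int); any_n = defaultdict(int)
    pos_sum = defaultdict(float); pos_n = defaultdict(int)
    for stratum, visits in records:
        n[stratum] += 1
        if visits > 0:
            any_n[stratum] += 1
            pos_sum[stratum] += visits
            pos_n[stratum] += 1
    return {s: (any_n[s] / n[s],
                pos_sum[s] / pos_n[s] if pos_n[s] else 0.0)
            for s in n}

def expected_visits(model, stratum):
    p_any, mean_pos = model[stratum]
    return p_any * mean_pos  # basis for a panel weight

panel = [("A", 0), ("A", 0), ("A", 2), ("A", 4), ("B", 1)]
model = fit_two_part(panel)
weight_a = expected_visits(model, "A")  # 0.5 * 3.0 = 1.5 expected visits
```

    Normalizing these expected frequencies across a panel gives the patient weights used to compute adjusted panel sizes.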

  14. Validation, Replication, and Sensitivity Testing of Heckman-Type Selection Models to Adjust Estimates of HIV Prevalence

    PubMed Central

    Clark, Samuel J.; Houle, Brian

    2014-01-01

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs, and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys. PMID:25402333

  15. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
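
    The core of a Simulated Annealing search is a random walk that always accepts improvements and accepts worse moves with probability exp(-delta/T) under a decreasing temperature T. The sketch below uses a one-dimensional stand-in objective with a known minimum, not the (much harder) negative ETAS log-likelihood; the cooling schedule and step size are arbitrary assumptions.

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, seed=1):
    """Minimize f starting from x0 with a linear cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9      # temperature decays to ~0
        cand = x + rng.gauss(0.0, 0.5)         # random neighbour
        fc = f(cand)
        # Accept improvements always; worse moves with Metropolis probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Stand-in objective with a known minimum at x = 3.
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
```

    For ETAS, x would be the parameter vector and f the negative log-likelihood; the acceptance rule is unchanged.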

  17. Extrinsic parameter extraction and RF modelling of CMOS

    NASA Astrophysics Data System (ADS)

    Alam, M. S.; Armstrong, G. A.

    2004-05-01

    An analytical approach for CMOS parameter extraction which includes the effect of parasitic resistance is presented. The method is based on small-signal equivalent circuit valid in all region of operation to uniquely extract extrinsic resistances, which can be used to extend the industry standard BSIM3v3 MOSFET model for radio frequency applications. The verification of the model was carried out through frequency domain measurements of S-parameters and direct time domain measurement at 2.4 GHz in a large signal non-linear mode of operation.

  18. Optimization of Parameter Selection for Partial Least Squares Model Development

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize the nonsystematic parameters of a quantitative model, i.e., spectral pretreatment, latent factors and variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the Partial Least Squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of standard error of prediction to standard deviation (RPD), and the determination coefficients of calibration (Rcal2) and validation (Rpre2) were assessed simultaneously to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there was more than one modeling path that ensured good modeling. The approach optimizes the modeling parameters step-by-step, and the robust model described here demonstrates better efficiency than previously published approaches.
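
    The figures of merit used to steer such a processing trajectory are straightforward to compute; below is a minimal sketch of RMSE and RPD (the toy reference/prediction values are illustrative, and the RPD > 2 remark is only a common rule of thumb, not a claim from the paper).

```python
def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
            / len(y_true)) ** 0.5

def rpd(y_true, y_pred):
    """Ratio of the (population) standard deviation of the reference
    values to the prediction error."""
    mean = sum(y_true) / len(y_true)
    sd = (sum((t - mean) ** 2 for t in y_true) / len(y_true)) ** 0.5
    return sd / rmse(y_true, y_pred)

# Toy calibration set; RPD > 2 is often read as a usable NIR model.
error = rmse([1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 3.0, 4.0])  # 0.5
ratio = rpd([1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 3.0, 4.0])   # sqrt(5)
```

    RMSEC and RMSEP are the same `rmse` evaluated on the calibration and prediction sets, respectively.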

  19. Climate change decision-making: Model & parameter uncertainties explored

    SciTech Connect

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives, and helps set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representations of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, policies for emissions mitigation, and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the model structure choice rather than by parameter uncertainties.
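
    Parameter-uncertainty propagation of the kind such integrated assessment models perform can be sketched in a few lines: draw each uncertain input from its distribution, run the model, and summarize the spread of the output. The two-input toy model and its distributions below are illustrative assumptions.

```python
import random

def propagate(model, dists, n=5000, seed=0):
    """Monte Carlo propagation: dists maps name -> sampler taking an RNG."""
    rng = random.Random(seed)
    outs = sorted(model({k: draw(rng) for k, draw in dists.items()})
                  for _ in range(n))
    return {"p05": outs[int(0.05 * n)],
            "median": outs[n // 2],
            "p95": outs[int(0.95 * n)]}

# Toy model: output = a * b with independent normally distributed inputs,
# so the median lands near 2 * 3 = 6.
summary = propagate(lambda p: p["a"] * p["b"],
                    {"a": lambda r: r.gauss(2.0, 0.1),
                     "b": lambda r: r.gauss(3.0, 0.1)})
```

    Comparing output quantiles obtained while varying one input group at a time is the basic mechanism behind the relative-contribution ranking reported in the abstract.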

  20. A pressure consistent bridge correction of Kovalenko-Hirata closure in Ornstein-Zernike theory for Lennard-Jones fluids by apparently adjusting sigma parameter

    NASA Astrophysics Data System (ADS)

    Ebato, Yuki; Miyata, Tatsuhiko

    2016-05-01

    Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, Uex, the pressure through the virial route, Pv, and the excess chemical potential, μex, for one-component Lennard-Jones (LJ) fluids under the hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, our previous paper showed that apparently adjusting the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Mol. Liq. 217, 75 (2016)]. In that paper, we evaluated the actual variation in the σ parameter by fitting to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes the condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without fitting to MD results, and retains the form of the HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of the proposed bridge function, and discuss the precision of these thermodynamic quantities by comparison with MD results. In addition, we calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.

  1. Force Field Independent Metal Parameters Using a Nonbonded Dummy Model

    PubMed Central

    2014-01-01

    The cationic dummy atom approach provides a powerful nonbonded description for a range of alkaline-earth and transition-metal centers, capturing both structural and electrostatic effects. In this work we refine existing literature parameters for octahedrally coordinated Mn2+, Zn2+, Mg2+, and Ca2+, and provide new parameters for Ni2+, Co2+, and Fe2+. In all cases, we are able to reproduce both M2+–O distances and experimental solvation free energies, which has not been achieved to date for transition metals using any other model. The parameters have also been tested with two different water models and show consistent performance; they are therefore easily transferable to any force field that describes nonbonded interactions using Coulomb and Lennard-Jones potentials. Finally, we demonstrate the stability of our parameters in the human and Escherichia coli variants of the enzyme glyoxalase I as showcase systems, as both enzymes are active with a range of transition metals. The parameters presented in this work provide a valuable resource for the molecular simulation community, as they extend the range of metal ions that can be studied using classical approaches, while also providing a starting point for subsequent parametrization of new metal centers. PMID:24670003
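
    The nonbonded description referred to above is the standard Lennard-Jones 12-6 plus Coulomb pair energy. The sketch below assumes kcal/mol, Angstrom, and elementary-charge units (hence the ~332.06 Coulomb prefactor), and the epsilon/sigma values are placeholders, not the paper's fitted parameters.

```python
def nonbonded_energy(r, eps, sigma, qi=0.0, qj=0.0, ke=332.0636):
    """Lennard-Jones 12-6 plus Coulomb pair energy at separation r (Angstrom).

    ke is the Coulomb constant in kcal*Angstrom/(mol*e^2).
    """
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 * sr6 - sr6)   # zero at r = sigma, minimum -eps
    coulomb = ke * qi * qj / r
    return lj + coulomb

# With charges off, the LJ term crosses zero at r = sigma and reaches
# its minimum of -eps at r = 2**(1/6) * sigma.
r_min = 2.0 ** (1.0 / 6.0) * 3.2
e_min = nonbonded_energy(r_min, eps=0.1, sigma=3.2)
```

    In the dummy-atom scheme, the metal's charge is distributed over dummy sites around the central atom, but each pairwise term still has exactly this form.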

  2. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. 
This report is concerned primarily with the

  3. Considerations for parameter optimization and sensitivity in climate models.

    PubMed

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841

  4. Considerations for parameter optimization and sensitivity in climate models

    PubMed Central

    Neelin, J. David; Bracco, Annalisa; Luo, Hao; McWilliams, James C.; Meyerson, Joyce E.

    2010-01-01

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention—here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841

  5. Temporal adaptability and the inverse relationship to sensitivity: a parameter identification model.

    PubMed

    Langley, Keith

    2005-01-01

    Following a prolonged period of visual adaptation to a temporally modulated sinusoidal luminance pattern, the threshold contrast of a similar visual pattern is elevated. The adaptive elevation in threshold contrast is selective for spatial frequency, may saturate at low adaptor contrast, and increases as a function of the spatio-temporal frequency of the adapting signal. A model for signal extraction that is capable of explaining these threshold contrast effects of adaptation is proposed. Contrast adaptation in the model is explained by the identification of the parameters of an environmental model: the autocorrelation function of the visualized signal. The proposed model predicts that the adaptability of threshold contrast is governed by unpredicted signal variations present in the visual signal, and thus represents an internal adjustment by the visual system that takes into account these unpredicted signal variations given the additional possibility for signal corruption by additive noise.

  6. Parameter space for a dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Oliveira, Diego F. M.; Leonel, Edson D.

    2011-12-01

    The parameter space for a dissipative bouncing ball model under the effect of inelastic collisions is studied. The system is described using a two-dimensional nonlinear area-contracting map. The introduction of dissipation destroys the mixed structure of phase space of the non-dissipative case, leading to the existence of a chaotic attractor and attracting fixed points, which may coexist for certain ranges of control parameters. We have computed the average velocity for the parameter space and made a connection with the parameter space based on the maximum Lyapunov exponent. For both cases, we found an infinite family of self-similar structures of shrimp shape, which correspond to the periodic attractors embedded in a large region that corresponds to the chaotic motion.
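
    The abstract does not reproduce the map equations, so the sketch below uses a commonly quoted simplified dissipative Fermi-Ulam form as a stand-in (assumed equations and illustrative parameter values; α < 1 plays the role of the inelastic-collision restitution) to show how the average velocity, the quantity mapped over the parameter space, would be computed:

```python
import math

def average_velocity(v0, phi0, eps=0.01, alpha=0.93, n_steps=10_000):
    """Iterate a schematic dissipative Fermi-Ulam-type map and return the
    average post-collision velocity.  eps is the wall-oscillation amplitude;
    alpha < 1 contracts phase-space area (inelastic collisions)."""
    v, phi, total = v0, phi0, 0.0
    for _ in range(n_steps):
        v = max(abs(alpha * v + (1.0 + alpha) * eps * math.cos(phi)), 1e-12)
        phi = (phi + 2.0 / v) % (2.0 * math.pi)
        total += v
    return total / n_steps

print(average_velocity(0.1, 0.5))   # attractor-dependent mean velocity
```

    Scanning (eps, alpha) over a grid and color-coding this average is what produces the shrimp-shaped periodic structures described above.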

  7. Important observations and parameters for a salt water intrusion model

    USGS Publications Warehouse

    Shoemaker, W.B.

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  8. Important observations and parameters for a salt water intrusion model.

    PubMed

    Shoemaker, W Barclay

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  9. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is proposed, along with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  10. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
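
    For reference, the population-level (fixed) part of such a model is the Gompertz curve itself; a minimal sketch with illustrative broiler-like values (not the paper's estimates). The inflection point, where growth rate peaks, falls at w_max/e:

```python
import numpy as np

def gompertz(t, w_max, b, t_infl):
    """Gompertz growth curve: asymptotic BW w_max, rate b, inflection at t_infl."""
    return w_max * np.exp(-np.exp(-b * (t - t_infl)))

t = np.arange(0, 71)                                  # age in days
bw = gompertz(t, w_max=4000.0, b=0.06, t_infl=35.0)   # grams, illustrative
print(round(float(bw[35]), 1))                        # 1471.5 (= 4000 / e)
```

    In the mixed-effects version, w_max (and possibly b) would additionally carry a bird-specific random deviation, which is what partitions BW variation into between- and within-bird components.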

  11. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will strengthen agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI), and the chlorophyll content, from high-resolution remote sensing. 30 Landsat 8 OLI scenes were available for our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
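
    A hedged sketch of the modelling approach, using scikit-learn's RandomForestRegressor on synthetic reflectance-like data with an NDVI-like target (the actual Landsat 8 and in-situ data are of course not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 reflectance "bands" per sample, LAI-like target
# driven by an NDVI-style contrast between bands 0 and 1 (illustrative only).
X = rng.uniform(0.0, 0.5, size=(200, 4))
ndvi = (X[:, 1] - X[:, 0]) / (X[:, 1] + X[:, 0] + 1e-9)
y = np.clip(6.0 * ndvi, 0.0, None) + rng.normal(0.0, 0.1, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.score(X, y))              # in-sample R2, optimistic by construction
print(model.feature_importances_)     # which "bands" drive the prediction
```

    Variable importance here plays the role of the study's band/index screening; honest R² and RMSE figures would come from held-out plots or cross-validation rather than the in-sample score printed above.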

  12. Stress and personal resource as predictors of the adjustment of parents to autistic children: a multivariate model.

    PubMed

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-07-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment, and the child's autism symptoms. 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires measuring parental stress, personal resources (sense of coherence, locus of control, social support), adjustment (mental health and marriage quality), and the child's autism symptoms. Path analysis showed that sense of coherence, internal locus of control, social support, and quality of marriage increase the ability to cope with the stress of parenting an autistic child. Directions for further research are suggested.

  13. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ{sup (j)} of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points with j≤j′≤j+n. To model the variation of φ{sup (j)}, we assume that φ{sup (j)} depends on φ{sup (j−m)}, φ{sup (j−m+1)},…, φ{sup (j−1)} and the interest rate r{sub j+n} at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r{sub j+n+1} of the interest rate at the next time point, given the value r{sub j+n}. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r{sub j+n+d} at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
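
    The CKLS short-rate dynamics, dr = (α + βr)dt + σ r^γ dW, can be simulated with a simple Euler-Maruyama scheme; the sketch below uses illustrative parameter values, not estimates from the paper:

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt=1.0 / 252, n_steps=252, seed=0):
    """One Euler-Maruyama path of dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = (alpha + beta * r[i]) * dt
        diffusion = sigma * r[i] ** gamma * dw
        r[i + 1] = max(r[i] + drift + diffusion, 1e-8)  # keep the rate positive
    return r

path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5)
print(path[-1])   # simulated rate one year ahead
```

    The stochastic-parameter variant of the paper would re-estimate (α, β, σ, γ) on each rolling window rather than holding them fixed along the path.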

  14. Nilsson parameters κ and μ in relativistic mean field models

    NASA Astrophysics Data System (ADS)

    Sulaksono, A.; Mart, T.; Bahri, C.

    2005-03-01

    Nilsson parameters κ and μ have been studied in the framework of relativistic mean field (RMF) models. They are used to investigate the reason why RMF models give a relatively good prediction of the spin-orbit splitting but fail to reproduce the placement of the states with different orbital angular momenta. Instead of the relatively small effective mass M*, the independence of M* from the angular momentum l is found to be the reason.

  15. Atmosphere models and the determination of stellar parameters

    NASA Astrophysics Data System (ADS)

    Martins, F.

    2014-11-01

    We present the basic concepts necessary to build atmosphere models for any type of star. We then illustrate how atmosphere models can be used to determine stellar parameters. We focus on the effects of line-blanketing for hot stars, and on non-LTE and three dimensional effects for cool stars. We illustrate the impact of these effects on the determination of the ages of stars from the HR diagram.

  16. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  17. Parabolic problems with parameters arising in evolution model for phytoremediation

    NASA Astrophysics Data System (ADS)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One innovative method of removing metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  18. Integrating microbial diversity in soil carbon dynamic models parameters

    NASA Astrophysics Data System (ADS)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamic models have been developed during the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods over the two past decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamic models. Some interesting attempts can be found, dominated by the incorporation of several compartments of different groups of microbial biomass, in terms of functional traits and/or biogeochemical composition, to integrate microbial diversity. However, these are basically heuristic models in the sense that they are used to test hypotheses through simulations; they have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity into a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land-use history. 
Then, the soils were incubated for 104 days and labelled and non-labelled CO2 fluxes were measured at ten

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  20. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    PubMed

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs), the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFMs) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern, since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. The sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, Prob<0.0001). Regression coefficients represent the additional per-patient costs summed to the base payment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best-practice annual admission rate per patient. The model is a blended RACFM covering in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump-sum lung transplant payments; and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost
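
    The reported model is additive: each significant indicator's coefficient is summed to the base payment (the constant). A worked example using the coefficients quoted above (the patient profile itself is hypothetical):

```python
# Coefficients as reported in the abstract (per-patient dollars).
COEFFS = {
    "constant": 4492.7,
    "emergency": 1276.9,
    "outlier": 6377.1,
    "complexity": 3043.5,
    "procedures": 317.4,   # per procedure
}

def predicted_cost(emergency=0, outlier=0, complexity=0, procedures=0):
    """Base payment plus the additional costs flagged for this patient."""
    return (COEFFS["constant"]
            + COEFFS["emergency"] * emergency
            + COEFFS["outlier"] * outlier
            + COEFFS["complexity"] * complexity
            + COEFFS["procedures"] * procedures)

# Hypothetical complex emergency admission with two procedures:
print(round(predicted_cost(emergency=1, complexity=1, procedures=2), 1))  # 9447.9
```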

  1. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    SciTech Connect

    Young, L; Yang, F

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and leaf tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation and a 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for field sizes of 3.6 cm or less, where the initial gamma factors progressively increased as field size decreased; e.g., for a 1.6 cm field size, the Gamma increased from 56.1% to 98.8%. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small-field VMAT treatment plans for fixed-jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.

  2. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model

  3. Parameter Calibration of Mini-LEO Hill Slope Model

    NASA Astrophysics Data System (ADS)

    Siegel, H.

    2015-12-01

    The mini-LEO hill slope, located at Biosphere 2, is a small-scale catchment model used to study the ways landscapes change in response to biological, chemical, and hydrological processes. Previous experiments have shown that soil heterogeneity can develop as a result of groundwater flow, changing the characteristics of the landscape. To determine whether flow has caused heterogeneity within the mini-LEO hill slope, numerical models were used to simulate the observed seepage flow, water table height, and storativity. To begin, a numerical model of the hill slope was created using CATchment HYdrology (CATHY). The model was brought to an initial steady state by applying a rainfall rate of 5 mm/day for 180 days, after which a specific rainfall experiment of alternating intensities was applied. Next, a parameter calibration was conducted to fit the model to the observed data by changing soil parameters individually. The parameters of the best-fitting calibration were taken to be the most representative of those present within the mini-LEO hill slope. Our model indicated that heterogeneities had indeed arisen as a result of the rainfall events, resulting in a lower hydraulic conductivity downslope. The lower hydraulic conductivity downslope in turn caused increased storage of water and decreased seepage flow compared with homogeneous models. This shows that the hydraulic processes acting within a landscape can change the characteristics of the landscape itself, namely the permeability and conductivity of the soil. In the future, results from the excavation of soil in mini-LEO can be compared with the model's results to improve the model and validate its findings.
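
    The single-parameter calibration loop described (vary a soil parameter, rerun the forward model, keep the best-fitting value) can be sketched as follows; the forward model below is a deliberately trivial stand-in for CATHY, and every number is hypothetical:

```python
import numpy as np

def forward_model(k, rain=5.0):
    """Toy seepage response to hydraulic conductivity k (stand-in for CATHY)."""
    return rain * k / (k + 1.0)

observed_seepage = 3.0                       # hypothetical observation, mm/day
candidates = np.linspace(0.1, 10.0, 100)     # trial conductivity values
errors = [(forward_model(k) - observed_seepage) ** 2 for k in candidates]
best_k = candidates[int(np.argmin(errors))]
print(best_k)   # conductivity whose simulated seepage best matches the data
```

    Repeating this for each soil parameter in turn, as the study does, identifies the combination whose simulated seepage, water table, and storativity best reproduce the observations.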

  4. Electro-optical parameters of bond polarizability model for aluminosilicates.

    PubMed

    Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam

    2006-04-01

    Electro-optical parameters (EOPs) of the bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared to those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference of the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability related characteristics of relevant systems in the framework of BPM.

  5. Comparative Flow Dynamics in Two In Vitro Models of an Adjustable Systemic-Pulmonary Artery Shunt

    NASA Astrophysics Data System (ADS)

    Brown, Tim; Bates, Nathan; Douglas, William; Knapp, Charles; Jacob, Jamey

    2002-11-01

    Systemic-pulmonary artery (SPA) shunts are connections that exist to augment pulmonary blood flow in neonates born with single ventricle physiology. An appropriate balance between the systemic and pulmonary circulations is crucial to their survival. To achieve this, an adjustable SPA shunt is being developed at our institution that consists of a 4 mm PTFE tube with a screw plunger mechanism to achieve the desired change in flow rate by increasing pulmonary resistance. To determine the effect this mechanism has on flow patterns, two in vitro models were created: an idealized model with an axisymmetric constriction and a model developed from flow phantoms of the actual shunt under various actuations. These models were used to measure the instantaneous velocity and vorticity fields using PIV. Recirculation regions downstream of the constriction were observed for both models. For the idealized model, a separation region persisted for approximately 2-5 diameters downstream with a flow range between 600-850 cc/min, corresponding to in vivo conditions and a Re of approximately 1000-1500. In the realistic test sections, shedding vortices were visible 2.5 diameters downstream on the opposing side of the imposed constriction. The flow field structure and wall skin friction of the two cases under various conditions will be discussed.
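
    The quoted Reynolds-number range follows directly from the tube diameter and flow rates, assuming typical blood properties (density ~1060 kg/m^3, viscosity ~3.5 mPa·s; these property values are assumptions, not from the record):

```python
import math

# Sanity check of the quoted Re range for a 4 mm shunt at 600-850 cc/min.
# Assumed blood properties: rho = 1060 kg/m^3, mu = 3.5 mPa*s.
rho, mu, D = 1060.0, 3.5e-3, 4e-3          # SI units

def reynolds(q_cc_per_min):
    q = q_cc_per_min * 1e-6 / 60.0          # cc/min -> m^3/s
    v = q / (math.pi * (D / 2) ** 2)        # mean velocity in the tube
    return rho * v * D / mu

for q in (600, 850):
    print(f"{q} cc/min -> Re ~ {reynolds(q):.0f}")
```

    This lands at roughly Re ~ 960-1370, consistent with the "approximately 1000-1500" stated above.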

  6. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce the ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with a peak thickness of 4000 m and a surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  7. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  8. Telescoping strategies for improved parameter estimation of environmental simulation models

    NASA Astrophysics Data System (ADS)

    Matott, L. Shawn; Hymiak, Beth; Reslink, Camden; Baxter, Christine; Aziz, Shirmin

    2013-10-01

    The parameters of environmental simulation models are often inferred by minimizing differences between simulated output and observed data. Heuristic global search algorithms are a popular choice for performing minimization but many algorithms yield lackluster results when computational budgets are restricted, as is often required in practice. One way for improving performance is to limit the search domain by reducing upper and lower parameter bounds. While such range reduction is typically done prior to optimization, this study examined strategies for contracting parameter bounds during optimization. Numerical experiments evaluated a set of novel “telescoping” strategies that work in conjunction with a given optimizer to scale parameter bounds in accordance with the remaining computational budget. Various telescoping functions were considered, including a linear scaling of the bounds, and four nonlinear scaling functions that more aggressively reduce parameter bounds either early or late in the optimization. Several heuristic optimizers were integrated with the selected telescoping strategies and applied to numerous optimization test functions as well as calibration problems involving four environmental simulation models. The test suite ranged from simple 2-parameter surfaces to complex 100-parameter landscapes, facilitating robust comparisons of the selected optimizers across a variety of restrictive computational budgets. All telescoping strategies generally improved the performance of the selected optimizers, relative to baseline experiments that used no bounds reduction. Performance improvements varied but were as high as 38% for a real-coded genetic algorithm (RGA), 21% for shuffled complex evolution (SCE), 16% for simulated annealing (SA), 8% for particle swarm optimization (PSO), and 7% for dynamically dimensioned search (DDS). Inter-algorithm comparisons suggest that the SCE and DDS algorithms delivered the best overall performance. SCE appears well
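
    The telescoping idea is easy to state in code: contract the search bounds around the current best solution as the computational budget is consumed. A minimal sketch, assuming a linear schedule with a `gamma` exponent for the nonlinear variants (all function names and the toy random-search optimizer are hypothetical, not from the paper):

```python
import random

def telescope_bounds(lo, hi, best, frac_used, gamma=1.0):
    """Contract [lo, hi] around the current best solution.

    frac_used: fraction of the computational budget consumed (0..1).
    gamma=1 gives a linear schedule; gamma<1 shrinks aggressively early,
    gamma>1 shrinks aggressively late (the nonlinear variants in the text).
    """
    width = [(h - l) * (1.0 - frac_used**gamma) for l, h in zip(lo, hi)]
    new_lo = [max(l, b - w / 2) for l, b, w in zip(lo, best, width)]
    new_hi = [min(h, b + w / 2) for h, b, w in zip(hi, best, width)]
    return new_lo, new_hi

# Toy random-search optimizer using the schedule on the sphere function
def sphere(x): return sum(v * v for v in x)

rng = random.Random(1)
lo, hi = [-5.0] * 2, [5.0] * 2
best = [rng.uniform(l, h) for l, h in zip(lo, hi)]
budget = 500
for i in range(budget):
    clo, chi = telescope_bounds(lo, hi, best, i / budget)
    cand = [rng.uniform(l, h) for l, h in zip(clo, chi)]
    if sphere(cand) < sphere(best):
        best = cand
print("best:", best, "f:", sphere(best))
```

    At `frac_used = 1` the bounds collapse onto the incumbent, so late samples refine rather than explore, which is the behavior the study credits for the reported performance gains.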

  9. Realistic uncertainties on Hapke model parameters from photometric measurement

    NASA Astrophysics Data System (ADS)

    Schmidt, Frédéric; Fernando, Jennifer

    2015-11-01

    The single-particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b, describing the width of the scattering lobe, and the backscattering fraction c, describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We extend here an innovative Bayesian inversion method in order to study in detail the uncertainties of retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (2012) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) data, which are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk-resolved image, EPF-like, and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
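
    The b-c coupling can be made visible with a grid posterior on a sparse geometry. The sketch below uses one common double Henyey-Greenstein parameterization of the phase function and synthetic EPF-like data; the geometry, noise level, and true values are illustrative, and this is only the phase-function part of the full Hapke model:

```python
import numpy as np

# Grid-posterior sketch for the two-lobed Henyey-Greenstein phase function
# (one common parameterization; the paper's Hapke model has more parameters).
def dhg(g, b, c):
    """Double Henyey-Greenstein; g = phase angle in radians."""
    f = 1.0 - b * b
    back = f / (1.0 + 2.0 * b * np.cos(g) + b * b) ** 1.5
    fwd = f / (1.0 - 2.0 * b * np.cos(g) + b * b) ** 1.5
    return (1.0 + c) / 2.0 * back + (1.0 - c) / 2.0 * fwd

# Sparse, EPF-like geometry: a handful of phase angles only
g = np.radians([10, 30, 50, 70, 90])
rng = np.random.default_rng(2)
obs = dhg(g, 0.3, 0.4) + rng.normal(0, 0.02, g.size)

bb, cc = np.meshgrid(np.linspace(0.01, 0.9, 120), np.linspace(-0.9, 0.9, 120))
resid = np.array([[np.sum((dhg(g, b, c) - obs) ** 2)
                   for b, c in zip(brow, crow)]
                  for brow, crow in zip(bb, cc)])
post = np.exp(-resid / (2 * 0.02**2))
post /= post.sum()

# Posterior correlation between b and c exposes the parameter coupling
mb, mc = (post * bb).sum(), (post * cc).sum()
corr = (post * (bb - mb) * (cc - mc)).sum() / np.sqrt(
    (post * (bb - mb) ** 2).sum() * (post * (cc - mc) ** 2).sum())
print(f"posterior corr(b, c) = {corr:.2f}")
```

    With only a few phase angles the posterior is elongated in the b-c plane, which is the kind of retrieval-induced structure the article argues can masquerade as a physical hockey-stick relation.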

  10. Mass balance model parameter transferability on a tropical glacier

    NASA Astrophysics Data System (ADS)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers are of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society, within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, shortwave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of 14 to 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other site, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
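
    The 1000-run random-search calibration described above is straightforward to sketch. Here a toy degree-day melt model stands in for the process-based mass balance model, and the parameter names, ranges, and "stake readings" are all hypothetical:

```python
import numpy as np

# Random-search calibration: draw parameter sets uniformly within physical
# ranges and keep the set that minimizes RMSE against stake readings.
rng = np.random.default_rng(0)
temp = rng.normal(2.0, 3.0, 365)                 # daily air temperature [C]

def melt(ddf, thresh):
    """Cumulative melt [mm w.e.] of a toy degree-day model."""
    return np.cumsum(np.maximum(temp - thresh, 0.0) * ddf)

stake_days = [30, 90, 180, 364]
obs = melt(6.0, 0.5)[stake_days]                 # sparse synthetic "stake readings"

ranges = {"ddf": (2.0, 12.0), "thresh": (-1.0, 2.0)}
best, best_rmse = None, np.inf
for _ in range(1000):
    p = {k: rng.uniform(*v) for k, v in ranges.items()}
    sim = melt(p["ddf"], p["thresh"])[stake_days]
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    if rmse < best_rmse:
        best, best_rmse = p, rmse
print(best, best_rmse)
```

    Transferability can then be tested exactly as in the study: score the `best` set against a second site's (or year's) observations and compare the resulting RMSE with that site's own optimum.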

  11. Estimating demographic parameters using hidden process dynamic models.

    PubMed

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775
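
    The "two time series in parallel" idea reduces, for a capture-recapture example, to a forward pass over hidden states {alive, dead} with imperfect detection. A minimal sketch with illustrative survival and detection values (not the paper's case studies):

```python
import numpy as np

# Capture-recapture HMM: survival phi, detection p, dead is absorbing.
phi, p = 0.8, 0.6
T = np.array([[phi, 1 - phi],      # alive -> alive/dead
              [0.0, 1.0]])         # dead stays dead
# Emission probabilities P(observation | state); obs 1 = seen, 0 = not seen
E = np.array([[p, 1 - p],          # alive: seen with probability p
              [0.0, 1.0]])         # dead: never seen

def forward_loglik(history):
    """Log-likelihood of a sighting history after first capture (forward algorithm)."""
    alpha = np.array([1.0, 0.0])   # known alive at first capture
    for seen in history:
        alpha = (alpha @ T) * E[:, 0 if seen else 1]
    return np.log(alpha.sum())

print(forward_loglik([1, 0, 1]))   # seen, missed, seen again
```

    The "missed" occasion is automatically explained as either (alive and undetected) or (dead), which is exactly the state uncertainty the abstract describes; maximizing this likelihood over `phi` and `p` gives the frequentist estimates.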

  12. Multiple beam interference model for measuring parameters of a capillary.

    PubMed

    Xu, Qiwei; Tian, Wenjing; You, Zhihong; Xiao, Jinghua

    2015-08-01

    A multiple beam interference model based on the ray tracing method and interference theory is built to analyze the interference patterns of a capillary tube filled with a liquid. The relations between the angular widths of the interference fringes and the parameters of both the capillary and liquid are derived. Based on these relations, an approach is proposed to simultaneously determine four parameters of the capillary, i.e., the inner and outer radii of the capillary, the refractive indices of the liquid, and the wall material. PMID:26368114

  13. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  15. Comparison of Cone Model Parameters for Halo Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon

    2013-11-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle γ between the sky plane and the cone axis. We compare these parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and their calculated projection speeds from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48, and those between γ-values range from -0.08 to 0.47, which is much smaller than expected. The reason may be the different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s^-1, 169 km s^-1, and 152 km s^-1, respectively. We obtain the correlation coefficients between the locations from the models and the flare locations (R > 0.45). Finally, we discuss the strengths and weaknesses of these models in terms of space-weather application.

  16. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates for tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate the propagation of bias correction into flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
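
    An illustrative sketch of the underlying idea, a rain-rate-dependent multiplicative adjustment of satellite rates against collocated reference (NWP) rainfall. This shows only the bin-wise bias-factor concept on synthetic data; the actual Zhang et al. (2013) scheme also treats retrieval error and resolution effects, and all numbers here are invented:

```python
import numpy as np

# Bin-wise multiplicative bias adjustment of satellite rain rates against a
# reference field (synthetic data; satellite underestimates heavy rain).
rng = np.random.default_rng(3)
ref = rng.gamma(0.5, 8.0, 5000)                        # "reference" rates [mm/h]
sat = ref * (1.0 - 0.4 * (ref > 10)) \
          * rng.lognormal(0.0, 0.2, ref.size)          # biased satellite rates

bins = np.array([0.0, 1.0, 5.0, 10.0, np.inf])         # rain-rate classes [mm/h]
idx = np.digitize(sat, bins) - 1
factors = np.array([ref[idx == k].sum() / max(sat[idx == k].sum(), 1e-9)
                    for k in range(len(bins) - 1)])
adjusted = sat * factors[idx]

for k in range(len(bins) - 1):
    print(f"bin {k}: factor {factors[k]:.2f}")
```

    The heavy-rain bins get factors above one, mirroring the reduced underestimation of high rain rates reported above, while by construction the adjusted field conserves the reference total.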

  17. Individualization of the parameters of the three-elements Windkessel model using carotid pulse signal

    NASA Astrophysics Data System (ADS)

    Żyliński, Marek; Niewiadomski, Wiktor; Strasz, Anna; Gąsiorowska, Anna; Berka, Martin; Młyńczak, Marcel; Cybulski, Gerard

    2015-09-01

    The haemodynamics of the arterial system can be described by the three-element Windkessel model. As it is a lumped model, it does not account for pulse wave propagation phenomena: pulse wave velocity, reflection, and pulse pressure profile changes during propagation. The Modelflow method uses this model to calculate stroke volume and total peripheral resistance (TPR) from the pulse pressure obtained from the finger; the reliability of this method is questioned. The model parameters are: aortic input impedance (Zo), TPR, and arterial compliance (Cw). They were obtained from studies of human aorta preparations. Individual adjustment is performed based on the subject's age and gender. As Cw is also affected by diseases, this may lead to inaccuracies. Moreover, the Modelflow method transforms the pulse pressure recording from the finger (Finapres©) into a remarkably different pulse pressure in the aorta using a predetermined transfer function, another source of error. In the present study, we indicate a way to include in the Windkessel model information obtained by adding carotid pulse recording to the finger pressure measurement. This information allows individualization of the values of Cw and Zo. It also seems reasonable to utilize the carotid pulse, which better reflects aortic pressure, to individualize the transfer function. Despite its simplicity, the Windkessel model describes essential phenomena in the arterial system remarkably well; therefore, it seems worthwhile to check whether individualization of its parameters would increase the reliability of results obtained with this model.
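
    A minimal simulation of the three-element Windkessel (Zo, TPR, Cw as above) makes the model's behavior concrete. The parameter values, heart rate, and half-sine inflow below are illustrative, not individualized values from the study:

```python
import math

# Three-element Windkessel driven by a half-sine aortic inflow.
# Units: mmHg, mL, s (illustrative values only).
Zo, TPR, Cw = 0.05, 1.0, 1.5          # impedance, resistance [mmHg*s/mL], compliance [mL/mmHg]
HR, T_sys, SV = 60.0, 0.3, 70.0       # heart rate [bpm], systole [s], stroke volume [mL]
period = 60.0 / HR

def inflow(t):
    """Half-sine ejection carrying the prescribed stroke volume per beat."""
    tau = t % period
    if tau < T_sys:
        return SV * math.pi / (2 * T_sys) * math.sin(math.pi * tau / T_sys)
    return 0.0

# dP/dt = -P/(TPR*Cw) + Q*(1 + Zo/TPR)/Cw + Zo*dQ/dt  (forward Euler)
dt, P, t = 1e-4, 80.0, 0.0
samples = []
while t < 10 * period:                # run to a periodic steady state
    dq = (inflow(t + dt) - inflow(t)) / dt
    P += dt * (-P / (TPR * Cw) + inflow(t) * (1 + Zo / TPR) / Cw + Zo * dq)
    t += dt
    if t > 9 * period:                # keep only the last beat
        samples.append(P)
print(f"systolic/diastolic ~ {max(samples):.0f}/{min(samples):.0f} mmHg")
```

    Individualizing Cw and Zo, as proposed above, amounts to re-estimating these two constants from the added carotid pulse rather than from age- and gender-based population values.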

  18. Revised digestive parameter estimates for the Molly cow model.

    PubMed

    Hanigan, M D; Appuhamy, J A D R N; Gregorini, P

    2013-06-01

    The Molly cow model represents nutrient digestion and metabolism based on a mechanistic representation of the key biological elements. Digestive parameters were derived ad hoc from literature observations or were assumed. Preliminary work determined that several of these parameters did not represent the true relationships. The current work was undertaken to derive ruminal and postruminal digestive parameters and to use a meta-approach to assess the effects of interactions among nutrients and identify areas of model weakness. Model predictions were compared with a database of literature observations containing 233 treatment means. Mean square prediction errors were assessed to characterize model performance. Ruminal pH prediction equations had substantial mean bias, which caused problems in fiber digestion and microbial growth predictions. The pH prediction equation was reparameterized simultaneously with the several ruminal and postruminal digestion parameters, resulting in more realistic parameter estimates for ruminal fiber digestion, and moderate reductions in prediction errors for pH, neutral detergent fiber, acid detergent fiber, and microbial N outflow from the rumen; and postruminal digestion of neutral detergent fiber, acid detergent fiber, and protein. Prediction errors are still large for ruminal ammonia and outflow of starch from the rumen. The gain in microbial efficiency associated with fat feeding was found to be more than twice the original estimate, but in contrast to prior assumptions, fat feeding did not exert negative effects on fiber and protein degradation in the rumen. Microbial responses to ruminal ammonia concentrations were half saturated at 0.2 mM versus the original estimate of 1.2 mM. Residuals analyses indicated that additional progress could be made in predicting microbial N outflow, volatile fatty acid production and concentrations, and cycling of N between blood and the rumen. These additional corrections should lead to an even more

  19. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
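
    A toy version of the core idea, estimating an uncertain model parameter by augmenting the filter state, can be written in a few lines. This sketch uses a generic extended Kalman filter on a damped scalar signal with unknown damping; SPEKF itself uses exact statistics of the stochastically parameterized model rather than an EKF linearization, and all values here are illustrative:

```python
import numpy as np

# Augmented-state filtering: track [u, gamma] where du/dt = -gamma*u + forcing
# and gamma is unknown, from noisy observations of u only.
rng = np.random.default_rng(4)
dt, steps = 0.05, 2000
gamma_true, forcing = 1.5, 1.0

u, obs = 0.0, []
for _ in range(steps):                        # truth run, observed with noise
    u += dt * (-gamma_true * u + forcing) + np.sqrt(dt) * 0.1 * rng.normal()
    obs.append(u + 0.1 * rng.normal())

x = np.array([0.0, 0.5])                      # initial guess: gamma = 0.5
P = np.diag([1.0, 1.0])
Q = np.diag([0.1**2 * dt, 1e-4 * dt])         # model noise + parameter random walk
R = np.array([[0.1**2]])
H = np.array([[1.0, 0.0]])                    # only u is observed
for y in obs:
    # Predict: f(u, g) = [u + dt*(-g*u + forcing), g], with Jacobian F
    F = np.array([[1.0 - dt * x[1], -dt * x[0]], [0.0, 1.0]])
    x = np.array([x[0] + dt * (-x[1] * x[0] + forcing), x[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation of u
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([y]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(f"estimated gamma = {x[1]:.2f} (true {gamma_true})")
```

    The filter pulls the parameter from its wrong initial guess toward the truth purely through the innovation sequence, which is the mechanism the abstract describes for correcting model error online.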

  20. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Astrophysics Data System (ADS)

    Luthcke, S. B.; Marshall, J. A.

    1992-11-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model is evaluated.
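
    The plate-summation step can be sketched with the standard flat-plate solar radiation pressure formula. The areas, optical properties, mass, and geometry below are illustrative placeholders, not the actual TOPEX/Poseidon values, and only direct solar pressure is shown (no albedo, infrared, or thermal terms):

```python
import numpy as np

# Standard flat-plate SRP model, summed over plates as in a box-wing
# representation (illustrative spacecraft properties only).
SOLAR_FLUX, C = 1361.0, 2.998e8        # W/m^2 at 1 AU, speed of light [m/s]
MASS = 2400.0                          # spacecraft mass [kg]

def plate_accel(area, normal, spec, diff, sun_dir):
    """Acceleration from one flat plate; sun_dir points spacecraft -> Sun."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    s = np.asarray(sun_dir, float) / np.linalg.norm(sun_dir)
    cos_t = np.dot(n, s)
    if cos_t <= 0.0:                   # plate not illuminated
        return np.zeros(3)
    p = SOLAR_FLUX / C                 # radiation pressure [N/m^2]
    return -p * area / MASS * cos_t * (
        (1 - spec) * s + 2 * (spec * cos_t + diff / 3.0) * n)

# Box (6 faces) + solar array: (area, normal, specular, diffuse) per plate
plates = [
    (5.0, ( 1, 0, 0), 0.2, 0.3), (5.0, (-1, 0, 0), 0.2, 0.3),
    (6.0, ( 0, 1, 0), 0.2, 0.3), (6.0, ( 0,-1, 0), 0.2, 0.3),
    (8.0, ( 0, 0, 1), 0.2, 0.3), (8.0, ( 0, 0,-1), 0.2, 0.3),
    (25.0, (0.0, 0.6, 0.8), 0.1, 0.2),          # solar array, tilted
]
sun = (0.0, 0.0, 1.0)
total = sum(plate_accel(*pl, sun) for pl in plates)
print("total SRP acceleration [m/s^2]:", total)
```

    Adjusting the per-plate areas and reflectivities to fit tracking data is the parameter-estimation problem the abstract describes; the acceleration comes out at the expected 1e-8 to 1e-7 m/s^2 scale.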

  2. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed-model discrimination. This utility is demonstrated using the three-dimensional groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error-free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates between accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed-model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases, in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  3. A data-driven model of present-day glacial isostatic adjustment in North America

    NASA Astrophysics Data System (ADS)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. 
Along parts of the North American coastline, improved predictions of the long-term (kyr
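The least-squares combination step described in the abstract above can be sketched in a few lines. This is a schematic one-dimensional illustration, not the study's implementation: the prior uplift rate, observation, and covariances below are invented numbers, and the update formula is the standard least-squares (Bayesian) combination of a prior model with observational data.

```python
import numpy as np

def ls_update(x_prior, P, H, y, R):
    """Least-squares update of a prior state x_prior (covariance P)
    given observations y = H x + noise (noise covariance R)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain matrix
    x_post = x_prior + K @ (y - H @ x_prior)        # estimate pulled toward data
    P_post = (np.eye(len(x_prior)) - K @ H) @ P     # uncertainty shrinks
    return x_post, P_post

# Invented 1-D example: prior GIA uplift rate vs. a GPS-like observation.
x_prior = np.array([3.0])      # prior uplift rate, mm/yr (invented)
P = np.array([[1.0]])          # prior variance (invented)
H = np.array([[1.0]])          # direct-observation operator
y = np.array([4.0])            # observed uplift rate (invented)
R = np.array([[0.25]])         # observation variance (invented)

x_post, P_post = ls_update(x_prior, P, H, y, R)
```

The updated estimate lies between the prior and the observation, weighted by their variances, and the posterior variance is smaller than either input variance, which is the sense in which the combined model "minimizes the uncertainty associated with the forward modelled GIA inputs."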

  4. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
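The equivalence noted in the abstract above, that a moving star spot integrates to a Gaussian convolved with a line segment, can be illustrated numerically. This sketch is not the paper's analytical model: the grid size, Gaussian radius, and velocity are invented, and the smear is built by brute-force time integration of a moving Gaussian, after which a simple intensity-weighted centroid is computed.

```python
import numpy as np

def smeared_spot(grid=64, sigma=1.5, vx=6.0, steps=50):
    """Integrate a Gaussian spot moving at constant velocity vx (pixels per
    exposure) over the exposure time, producing a smeared star image."""
    y, x = np.mgrid[0:grid, 0:grid].astype(float)
    img = np.zeros((grid, grid))
    for t in np.linspace(0.0, 1.0, steps):           # exposure from t=0 to t=1
        cx = grid / 2 + vx * (t - 0.5)               # spot centre drifts in x
        cy = grid / 2
        img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return img / steps

def centroid(img):
    """Intensity-weighted centroid of an image."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

img = smeared_spot()
cx, cy = centroid(img)   # for a symmetric smear, the centroid is the midpoint
```

For a noise-free, symmetric smear the centroid recovers the mid-exposure position; the paper's contribution is the analytical error expression for the realistic, noisy case.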

  5. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from Hillsborough County, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest Rd(2) value as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to location, it is desirable first to use the geographical component to explore explicitly spatial aspects of the crash data (i.e., the spatial heterogeneity, or spatially structured varying relationships), and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR proves more appropriate for regional crash modeling because it outperforms the global models in capturing the spatial heterogeneity in the modeled relationship and, compared with the non-spatial model, accounts for the spatial correlation in crash data.

  7. The use of satellites in gravity field determination and model adjustment

    NASA Astrophysics Data System (ADS)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

Methods to improve gravity field models of the Earth using available satellite observation data are proposed and discussed. In principle, all of the satellite observation types mentioned provide information on satellite orbit perturbations, and thereby on the Earth's gravity field, because satellite orbits are affected most strongly by the Earth's gravity field. Two subjects are therefore addressed: representation forms of the Earth's gravity field and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations provided certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future is given, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission.

  8. Important observations and parameters for a salt water intrusion model.

    PubMed

    Shoemaker, W Barclay

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration. PMID:15584297

  9. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
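The windowed estimation idea in the abstract above can be illustrated with a deliberately simplified toy: a "pilot" whose output is the input scaled by a gain that steps mid-run, with the gain recovered by least squares over a sliding window. This is not the paper's windowed maximum likelihood method (which fits full pilot model dynamics), and the signals, window length, and step change are invented; the noise-free case is used so the effect of the window is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)
n, win = 400, 40
t = np.arange(n)

# Time-varying gain: steps from 1.0 to 1.5 halfway through the run (invented).
K_true = 1.0 + 0.5 * (t >= n // 2)
u = rng.standard_normal(n)        # input signal
y = K_true * u                    # noise-free output, for clarity

# Sliding-window least-squares estimate of the gain:
# K_hat = <u, y> / <u, u> over each window of length `win`.
K_hat = np.array([
    np.dot(u[i:i + win], y[i:i + win]) / np.dot(u[i:i + win], u[i:i + win])
    for i in range(n - win)
])
```

Windows lying wholly before the step recover the old gain and windows wholly after it recover the new one; windows straddling the step blur the transition, which is the window-length trade-off the abstract alludes to (longer windows suppress remnant noise but cannot detect fast changes).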

  10. UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA

    SciTech Connect

    Davis, S.C.

    2000-11-16

    The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.

  11. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
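The simulated-annealing component of the algorithm described above can be sketched generically. This is not the paper's implementation: the statistical-model-checking score is replaced by a simple squared-error stand-in, and the target parameter value, proposal width, and cooling schedule are all invented for illustration.

```python
import math
import random

def score(theta):
    """Stand-in for the model-checking cost: distance between model behaviour
    under theta and the observed facts (true parameter invented as 2.5)."""
    return (theta - 2.5) ** 2

def anneal(theta=0.0, T=1.0, cooling=0.995, steps=5000, seed=1):
    """Simulated annealing: accept downhill moves always, uphill moves with
    Boltzmann probability exp(-delta/T), while the temperature T cools."""
    rng = random.Random(seed)
    s = score(theta)
    best, best_s = theta, s
    for _ in range(steps):
        cand = theta + rng.gauss(0.0, 0.5)          # propose a perturbation
        cs = score(cand)
        if cs < s or rng.random() < math.exp(-(cs - s) / max(T, 1e-12)):
            theta, s = cand, cs
            if s < best_s:
                best, best_s = theta, s
        T *= cooling                                # cooling schedule
    return best

est = anneal()   # converges near the invented optimum 2.5
```

The high early temperature lets the search escape local minima; as T decays, acceptance of uphill moves becomes rare and the search settles into the best basin found.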

  12. Empirical flow parameters : a tool for hydraulic model validity

    USGS Publications Warehouse

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

The objectives of this project were: (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, producing charts such as Figure 1 and empirical distributions of the various flow parameters as a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  13. Numerical model for thermal parameters in optical materials

    NASA Astrophysics Data System (ADS)

    Sato, Yoichi; Taira, Takunori

    2016-04-01

Thermal parameters of optical materials, such as thermal conductivity, thermal expansion, and the temperature coefficient of refractive index, play a decisive role in the thermal design inside laser cavities. Accurate temperature-dependent values of these parameters are therefore essential for developing high-intensity laser oscillators, in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We previously proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of isovolumic specific heat and thermal diffusivity, and modeling these two quantities independently is required to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for specific heat and two parameters for thermal diffusivity in the calculation for each optical material. In this work we report thermal conductivities of various optical materials: Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics, and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO, and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of thermal expansion and of the temperature coefficient of refractive index for YAG, YVO, and GVO is also included in this work for convenience. We believe our numerical model is useful not only for thermal analysis in laser cavities and optical waveguides but also for evaluating the physical properties of various transparent materials.

  14. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  15. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    ERIC Educational Resources Information Center

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  16. Modelling spatial-temporal and coordinative parameters in swimming.

    PubMed

    Seifert, L; Chollet, D

    2009-07-01

    This study modelled the changes in spatial-temporal and coordinative parameters through race paces in the four swimming strokes. The arm and leg phases in simultaneous strokes (butterfly and breaststroke) and the inter-arm phases in alternating strokes (crawl and backstroke) were identified by video analysis to calculate the time gaps between propulsive phases. The relationships among velocity, stroke rate, stroke length and coordination were modelled by polynomial regression. Twelve elite male swimmers swam at four race paces. Quadratic regression modelled the changes in spatial-temporal and coordinative parameters with velocity increases for all four strokes. First, the quadratic regression between coordination and velocity showed changes common to all four strokes. Notably, the time gaps between the key points defining the beginning and end of the stroke phases decreased with increases in velocity, which led to decreases in glide times and increases in the continuity between propulsive phases. Conjointly, the quadratic regression among stroke rate, stroke length and velocity was similar to the changes in coordination, suggesting that these parameters may influence coordination. The main practical application for coaches and scientists is that ineffective time gaps can be distinguished from those that simply reflect an individual swimmer's profile by monitoring the glide times within a stroke cycle. In the case of ineffective time gaps, targeted training could improve the swimmer's management of glide time. PMID:18547862
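The quadratic (second-degree polynomial) regression used in the study above is a standard fit and can be illustrated briefly. The data points below are invented and constructed to lie exactly on the curve sr = 25v^2 - 30v + 25, not the swimmers' measurements; they only show the mechanics of fitting stroke rate against velocity.

```python
import numpy as np

# Invented velocity / stroke-rate pairs lying exactly on sr = 25v^2 - 30v + 25.
velocity = np.array([1.4, 1.5, 1.6, 1.7, 1.8])            # m/s
stroke_rate = np.array([32.0, 36.25, 41.0, 46.25, 52.0])  # cycles/min

coeffs = np.polyfit(velocity, stroke_rate, deg=2)   # quadratic regression
model = np.poly1d(coeffs)                           # callable fitted polynomial

predicted = model(1.65)   # interpolated stroke rate at an intermediate velocity
```

Because the invented data are exactly quadratic, the fit recovers the generating coefficients; with real swimming data the same call returns the least-squares quadratic, and the R^2 of that fit is what quantifies how well velocity explains each parameter.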

  17. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    PubMed

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting. PMID:9306648

  18. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

Travelers' route adjustment behaviors in a congested road traffic network are acknowledged to form a dynamic game process among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, as PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing models have limitations, such as the flow over-adjustment problem of the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium, the stationary path flow pattern, and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached user equilibrium by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
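The day-to-day dynamics described above can be illustrated on the smallest possible network. This is a simplified sketch in the spirit of a relative-proportion switching rule, not the paper's rePRAP formulation: the two-route network, the link-cost function, and the switching rate alpha are all invented for illustration.

```python
def cost(flow):
    """Hypothetical congestion-sensitive route cost (invented)."""
    return 10.0 + 0.01 * flow ** 2

def step(f1, f2, alpha=0.05):
    """One day of adjustment: travelers leave the costlier route at a rate
    proportional to the *relative* cost gap (gap divided by the higher cost)."""
    c1, c2 = cost(f1), cost(f2)
    if c1 > c2:
        shift = alpha * f1 * (c1 - c2) / c1
        return f1 - shift, f2 + shift
    shift = alpha * f2 * (c2 - c1) / c2
    return f1 + shift, f2 - shift

f1, f2 = 90.0, 10.0          # total demand 100, deliberately unbalanced start
for _ in range(500):
    f1, f2 = step(f1, f2)
# flows converge toward equal costs (here f1 and f2 both approach 50)
```

Dividing the cost gap by the higher cost keeps the switching fraction bounded, which is the intuition behind avoiding the over-adjustment (oscillation) problem of absolute-difference rules; the stationary point of the iteration is the user-equilibrium flow pattern.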

  20. Modeling and Extraction of Parasitic Thermal Conductance and Intrinsic Model Parameters of Thermoelectric Modules

    NASA Astrophysics Data System (ADS)

    Sim, Minseob; Park, Hyunbin; Kim, Shiho

    2015-11-01

We present both a model and a method for extracting the parasitic thermal conductance as well as the intrinsic device parameters of a thermoelectric module based on information readily available in vendor datasheets. An equivalent circuit model compatible with circuit simulators is derived, followed by a methodology for extracting both the intrinsic and the parasitic model parameters. For the first time, the effective thermal resistance of the ceramic and copper interconnect layers of the thermoelectric module is extracted using only parameters listed in vendor datasheets. Under experimental conditions, including varying electric current, the parameters extracted from the model accurately reproduce the performance of commercial thermoelectric modules.

  1. Nonlocal Order Parameters for the 1D Hubbard Model

    NASA Astrophysics Data System (ADS)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-01

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point Uc=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at Uc. The behavior of the parity correlators is captured by an effective free spinless fermion model.

  2. A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada

    NASA Astrophysics Data System (ADS)

    Simon, K. M.; James, T. S.; Dyke, A. S.

    2015-07-01

    A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 to 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.

  3. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010, which is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  4. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable. PMID:25763948

  5. Order-parameter model for unstable multilane traffic flow

    NASA Astrophysics Data System (ADS)

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the ``free flow <--> synchronized mode <--> jam'' phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. It is principally due to the ``many-body'' effects in the car interaction, in contrast to such variables as the mean car density and velocity, which are actually the zeroth and first moments of the ``one-particle'' distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of ``fast'' drivers, and by taking into account the general properties of driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifests itself in the above-mentioned phase transitions and gives rise to the hysteresis in both of them. In addition, the jam is characterized by vehicle flows at different lanes that are independent of one another. We specify a certain simplified model in order to study the general features of car cluster self-formation under the ``free flow <--> synchronized motion'' phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  6. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
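    The Akaike ranking step described above can be sketched numerically: for least-squares fits with i.i.d. Gaussian residuals, an AIC-style index n·ln(RSS/n) + 2k orders the candidate models. The model names and numbers below are hypothetical, not values from the LCMV study.

    ```python
    import numpy as np

    def akaike_index(rss, n_obs, n_params):
        """AIC for least-squares fits assuming i.i.d. Gaussian residuals:
        AIC = n*ln(RSS/n) + 2k (additive constants dropped)."""
        return n_obs * np.log(rss / n_obs) + 2 * n_params

    # Hypothetical residual sums of squares and parameter counts for three
    # candidate models (ODE vs. delay differential equation variants)
    models = {"ODE-3param": (12.4, 3), "DDE-4param": (9.8, 4), "DDE-6param": (9.5, 6)}
    n = 40  # number of observed data points

    # Arrange the models in a hierarchy: lowest AIC first
    ranked = sorted(models, key=lambda m: akaike_index(models[m][0], n, models[m][1]))
    print(ranked)
    ```

    Note how the 6-parameter variant is penalized: its slightly lower RSS does not compensate for the two extra parameters.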

  7. How many parameters does a quark mass matrix model need

    SciTech Connect

    Koide, Y. )

    1990-11-01

    An investigation independent of matrix form is made of how many parameters, which characterize the difference between up- and down-quark mass matrices, are at least required from the present data on quark masses and mixings. From a general study of the model with hierarchical three-step mass generations described by the three parameters α_q, β_q, and γ_q (|α_q| ≫ |β_q| ≫ |γ_q|; q = u, d), it is pointed out that the model with β_u/β_d = γ_u/γ_d (i.e., with two independent parameters α_q and β_q) is ruled out.

  8. [Temperature dependence of parameters of plant photosynthesis models: a review].

    PubMed

    Borjigidai, Almaz; Yu, Gui-Rui

    2013-12-01

    This paper reviewed the progress on temperature response models of plant photosynthesis. Mechanisms involved in changes in the photosynthesis-temperature curve were discussed based on four parameters: intercellular CO2 concentration, activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (V_cmax), activation energy of the rate of RuBP regeneration (J_max), and the ratio of J_max to V_cmax. All species increased the activation energy of V_cmax with increasing growth temperature, while the other parameters changed but differed among species, suggesting that the activation energy of V_cmax might be the most important parameter for the temperature response of plant photosynthesis. In addition, research problems and prospects were proposed. It is necessary to combine photosynthesis models at the foliage and community levels, and to investigate the mechanism of plant response to global change from the aspects of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, carbon balance of communities and primary productivity of ecosystems.

  9. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  10. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method on wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements which are designed to achieve comparably accurate estimates but with less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual event's estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.
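    The Monte Carlo approximation of the response surface mentioned above can be sketched as follows: parameter sets are sampled within narrow bounds, the model is run for each, and a linear response surface is fitted by least squares. The toy model and bounds below are invented for illustration, not taken from the catchment study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(p):
        # Stand-in for the rainfall-runoff model: a system response
        # (e.g. peak flow) as a function of two calibration parameters.
        return 3.0 * p[0] + 1.5 * p[1] + 0.1 * p[0] * p[1]

    # Simple Monte Carlo sampling within narrow parameter bounds
    lo, hi = np.array([0.8, 0.4]), np.array([1.2, 0.6])
    P = rng.uniform(lo, hi, size=(50, 2))
    y = np.array([toy_model(p) for p in P])

    # Fit a linear response surface y ~ b0 + b1*p1 + b2*p2 by least squares
    X = np.column_stack([np.ones(len(P)), P])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(b)  # regression coefficients of the fitted response surface
    ```

    Over narrow bounds the weak interaction term is nearly linear, so the fitted slopes land close to the true sensitivities, which is precisely why a linear surface can replace the expensive model during the search for optimal parameter values.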

  11. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000 during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  12. Optimal vibration control of curved beams using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of linear quadratic optimal controller using spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are developed firstly considering its shear deformation and rotary inertia, and then the state space model of the curved beam is established directly using the partial differential equations of motion. The functional gains for the distributed parameter model of curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in frequency domain. Finally, the suppression of the vibration at the free end of a cantilevered curved beam by point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant gain velocity feedback control law, and the performance of the presented method on avoidance of control spillover is demonstrated.

  13. A modified inverse procedure for calibrating parameters in a land subsidence model and its field application in Shanghai, China

    NASA Astrophysics Data System (ADS)

    Luo, Yue; Ye, Shujun; Wu, Jichun; Wang, Hanmei; Jiao, Xun

    2016-05-01

    Land-subsidence prediction depends on an appropriate subsidence model and the calibration of its parameter values. A modified inverse procedure is developed and applied to calibrate five parameters in a compacting confined aquifer system using records of field data from vertical extensometers and corresponding hydrographs. The inverse procedure of COMPAC (InvCOMPAC) has been used in the past for calibrating vertical hydraulic conductivity of the aquitards, nonrecoverable and recoverable skeletal specific storages of the aquitards, skeletal specific storage of the aquifers, and initial preconsolidation stress within the aquitards. InvCOMPAC is modified to increase robustness in this study. There are two main differences in the modified InvCOMPAC model (MInvCOMPAC). One is that field data are smoothed before diagram analysis to reduce local oscillation of data and remove abnormal data points. A robust locally weighted regression method is applied to smooth the field data. The other difference is that the Newton-Raphson method, with a variable scale factor, is used to conduct the computer-based inverse adjustment procedure. MInvCOMPAC is then applied to calibrate parameters in a land subsidence model of Shanghai, China. Five parameters of aquifers and aquitards at 15 multiple-extensometer sites are calibrated. Vertical deformation of sedimentary layers can be predicted by the one-dimensional COMPAC model with these calibrated parameters at extensometer sites. These calibrated parameters could also serve as good initial values for parameters of three-dimensional regional land subsidence models of Shanghai.
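    The Newton-Raphson scheme with a variable scale factor can be sketched generically: when the full Newton step fails to reduce the residual, the scale factor is halved until it does. This is a minimal one-dimensional sketch of the idea, not the MInvCOMPAC code itself.

    ```python
    def damped_newton(f, dfdx, x0, scale=1.0, tol=1e-10, max_iter=100):
        """Newton-Raphson with a variable scale factor: shrink the step
        until the residual decreases, which makes the inverse adjustment
        more robust to overshooting than the plain Newton iteration."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            step = fx / dfdx(x)
            s = scale
            # Halve the scale factor while the damped step fails to
            # reduce the absolute residual
            while abs(f(x - s * step)) >= abs(fx) and s > 1e-8:
                s *= 0.5
            x -= s * step
        return x

    # Example: solve x^3 - 2 = 0 from a poor starting point
    root = damped_newton(lambda x: x**3 - 2.0, lambda x: 3 * x**2, x0=5.0)
    print(round(root, 6))  # cube root of 2, about 1.259921
    ```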

  14. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
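    A minimal sketch of the kind of kinetics compiled here: Michaelis-Menten with an Arrhenius temperature correction and a pH modifier. The Gaussian-shaped pH function below is an assumption for illustration (the study fits an exponential-quadratic form), and all parameter values are invented, not taken from the compilation.

    ```python
    import math

    def enzyme_rate(S, Vmax_ref, Km, Ea, T, pH, pH_opt, pH_sen,
                    T_ref=293.15, R=8.314):
        """Michaelis-Menten rate v = Vmax*S/(Km+S), with Vmax rescaled from
        the reference temperature (20 C = 293.15 K) via Arrhenius, and
        reduced away from the pH optimum by a Gaussian-shaped modifier
        whose width is pH_sen (assumed functional form)."""
        vmax = Vmax_ref * math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
        ph_mod = math.exp(-((pH - pH_opt) / pH_sen) ** 2 / 2.0)
        return vmax * ph_mod * S / (Km + S)

    # At the reference temperature and pH optimum the corrections vanish:
    v = enzyme_rate(S=10.0, Vmax_ref=5.0, Km=2.0, Ea=50e3,
                    T=293.15, pH=5.0, pH_opt=5.0, pH_sen=1.5)
    print(v)  # Vmax_ref * S/(Km+S) = 5 * 10/12
    ```

    With this Gaussian form, a shift of pH_sen·sqrt(2 ln 2) ≈ 1.18·pH_sen away from pHopt halves Vmax, which is the sense in which the abstract's "1.1 to 1.7 pH units for 50% reduction" statement can be read.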

  15. Multi-criteria parameter estimation for the Unified Land Model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-08-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10^5 km^2) river basins and 250 smaller-scale (<10^4 km^2) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the data sets individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations lead to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET) suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over (under) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.
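    A combined multi-criteria objective of the kind used for such calibrations can be sketched as a weighted sum of normalized errors, one per criterion (e.g. Q, ET, TWSC). The equal-weight form and the normalization by observation standard deviation are assumptions for illustration, not the paper's exact skill score.

    ```python
    import numpy as np

    def nrmse(sim, obs):
        """RMSE normalized by the standard deviation of the observations,
        so criteria with different units become comparable."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return np.sqrt(np.mean((sim - obs) ** 2)) / np.std(obs)

    def multi_criteria_score(pairs, weights=None):
        """Weighted sum of normalized RMSEs over several (sim, obs) pairs;
        lower is better. Equal weights by default (an assumption)."""
        errs = np.array([nrmse(s, o) for s, o in pairs])
        w = np.ones(len(errs)) / len(errs) if weights is None else np.asarray(weights)
        return float(np.dot(w, errs))

    # Illustrative streamflow (Q) and evapotranspiration (ET) series
    obs_q, sim_q = [1.0, 2.0, 3.0, 2.0], [1.1, 1.9, 3.2, 2.1]
    obs_et, sim_et = [2.0, 2.5, 3.0, 2.5], [2.2, 2.4, 2.8, 2.6]
    score = multi_criteria_score([(sim_q, obs_q), (sim_et, obs_et)])
    print(score)
    ```

    A calibration would minimize this score over the model parameters; varying the weights shifts the optimum between Q-dominated and ET/TWSC-dominated fits, which is exactly the trade-off the abstract reports.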

  16. Multi-criteria parameter estimation for the unified land model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-04-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10^5 km^2) river basins and 250 smaller-scale (<10^4 km^2) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting model, is the basis for these experiments. Calibrations were made using each of the criteria individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations lead to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET) suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over (under) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  17. Sensitivity Analysis of Parameters in Linear-Quadratic Radiobiologic Modeling

    SciTech Connect

    Fowler, Jack F.

    2009-04-01

    Purpose: Radiobiologic modeling is increasingly used to estimate the effects of altered treatment plans, especially for dose escalation. The present article shows how much the linear-quadratic (LQ) calculated biologically equivalent dose (BED) varies when individual parameters of the LQ formula are varied by ±20% and by 1%. Methods: Equivalent total doses (EQD2 = normalized total doses (NTD) in 2-Gy fractions) for tumor control, acute mucosal reactions, and late complications were calculated using the linear-quadratic formula with overall time: BED = nd(1 + d/[α/β]) − log_e2 (T − Tk)/(αTp), i.e., BED = total dose × relative effectiveness, where RE = 1 + d/[α/β]. Each of the five biologic parameters in turn was altered by ±10%, and the altered EQD2s tabulated; the difference was finally divided by 20. EQD2 or NTD is obtained by dividing BED by the RE for 2-Gy fractions, using the appropriate α/β ratio. Results: Variations in tumor and acute mucosal EQD ranged from 0.1% to 0.45% per 1% change in each parameter for conventional schedules, the largest variation being caused by overall time. Variations in 'late' EQD were 0.4% to 0.6% per 1% change in the only biologic parameter, the α/β ratio. For stereotactic body radiotherapy schedules, variations were larger, up to 0.6% to 0.9% for tumor and 1.6% to 1.9% for late, per 1% change in parameter. Conclusions: Robustness occurs similar to that of equivalent uniform dose (EUD), for the same reasons. Total dose, dose per fraction, and dose rate cause their major effects, as is well known.
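    The BED and EQD2 relations in the Methods can be coded directly. In this sketch the overall-time term (with Tk, Tp, and α) is applied only when time parameters are supplied; variable names follow the abstract's formula.

    ```python
    import math

    def bed(n, d, ab, T=None, Tk=None, Tp=None, alpha=None):
        """Linear-quadratic BED with optional overall-time correction:
        BED = n*d*(1 + d/(a/b)) - ln(2)*(T - Tk)/(alpha*Tp)."""
        b = n * d * (1.0 + d / ab)
        if T is not None:
            b -= math.log(2.0) * (T - Tk) / (alpha * Tp)
        return b

    def eqd2(bed_val, ab):
        """Equivalent total dose in 2-Gy fractions: EQD2 = BED / RE(2 Gy),
        where RE(2 Gy) = 1 + 2/(a/b)."""
        return bed_val / (1.0 + 2.0 / ab)

    # 30 fractions of 2 Gy, tumor alpha/beta = 10 Gy, no time factor
    b = bed(30, 2.0, 10.0)
    print(b, eqd2(b, 10.0))
    ```

    For 30 × 2 Gy with α/β = 10 Gy this gives BED = 60 × 1.2 = 72 Gy and EQD2 = 72/1.2 = 60 Gy, as expected for the reference 2-Gy schedule; adding an overall-time term (T > Tk) reduces the BED.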

  18. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration.

    PubMed

    Chia, Yen Lin; Salzman, Peter; Plevritis, Sylvia K; Glynn, Peter W

    2004-12-01

    Simulation-based parameter estimation offers a powerful means of estimating parameters in complex stochastic models. We illustrate the application of these ideas in the setting of a natural history model for breast cancer. Our model assumes that the tumor growth process follows a geometric Brownian motion; parameters are estimated from the SEER registry. Our discussion focuses on the use of simulation for computing the maximum likelihood estimator for this class of models. The analysis shows that simulation provides a straightforward means of computing such estimators for models of substantial complexity.
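    The growth process the model assumes can be simulated directly: geometric Brownian motion has an exact log-normal update, so sample paths can be drawn without discretization error. The parameter values below are illustrative, not the SEER-fitted estimates.

    ```python
    import math
    import random

    def simulate_gbm(v0, mu, sigma, years, steps, seed=42):
        """One sample path of tumor volume under geometric Brownian motion,
        using the exact update V_{t+dt} = V_t * exp((mu - sigma^2/2)*dt
        + sigma*sqrt(dt)*Z) with Z ~ N(0, 1)."""
        rng = random.Random(seed)
        dt = years / steps
        v = v0
        path = [v]
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            v *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            path.append(v)
        return path

    # Illustrative path: initial volume 0.1 cm^3, drift 0.8/yr, volatility 0.4
    path = simulate_gbm(v0=0.1, mu=0.8, sigma=0.4, years=5.0, steps=60)
    print(path[-1])  # simulated volume after 5 years
    ```

    Simulation-based likelihood evaluation then repeats such draws many times per candidate parameter set, comparing simulated detection-size distributions against registry data.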

  19. Parameter uncertainty in biochemical models described by ordinary differential equations.

    PubMed

    Vanlier, J; Tiemann, C A; Hilbers, P A J; van Riel, N A W

    2013-12-01

    Improved mechanistic understanding of biochemical networks is one of the driving ambitions of Systems Biology. Computational modeling allows the integration of various sources of experimental data in order to put this conceptual understanding to the test in a quantitative manner. The aim of computational modeling is to obtain both predictive and explanatory models for complex phenomena, thereby providing useful approximations of reality with varying levels of detail. As the complexity required to describe different systems increases, so does the need for determining how well such predictions can be made. Despite efforts to make tools for uncertainty analysis available to the field, these methods have not yet found widespread use in Systems Biology. Additionally, the suitability of the different methods strongly depends on the problem and system under investigation. This review provides an introduction to some of the techniques available, as well as an overview of the state-of-the-art methods for parameter uncertainty analysis.

  20. Modelling of some parameters from thermoelectric power plants

    NASA Astrophysics Data System (ADS)

    Popa, G. N.; Diniş, C. M.; Deaconu, S. I.; Maksay, Şt; Popa, I.

    2016-02-01

    This paper proposes new mathematical models for the main electrical parameters (active power P and reactive power Q of the power supplies) and technological parameters (mass flow rate of steam M from the boiler and dust emission E at the output of the precipitator) of a thermoelectric power plant using industrial plate-type electrostatic precipitators with three sections. The mathematical models were built from experimental results taken from an industrial facility (the boiler and a plate-type electrostatic precipitator with three sections), using the least squares method for their determination. The modelling used equations of degree 1, 2 and 3. The equations relate dust emission to the active power of the power supplies and the mass flow rate of steam from the boiler, and likewise to the reactive power of the power supplies and the mass flow rate of steam. These equations can be used to control the electrostatic precipitation process.
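    The least-squares fits of degree 1, 2 and 3 described above can be sketched with a standard polynomial fit. The measurements below are invented for illustration, not the plant data.

    ```python
    import numpy as np

    # Illustrative measurements: dust emission E (mg/m^3) versus active
    # power P (kW) of the precipitator power supplies (hypothetical values)
    P = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
    E = np.array([95.0, 70.0, 52.0, 41.0, 33.0, 30.0])

    # Least-squares fits of degree 1, 2 and 3, as in the paper's approach;
    # the residual sum of squares shows how much each added degree helps
    for deg in (1, 2, 3):
        coeffs = np.polyfit(P, E, deg)
        resid = E - np.polyval(coeffs, P)
        print(deg, float(np.sum(resid**2)))
    ```

    Because each higher-degree polynomial contains the lower-degree ones as a special case, the residual sum of squares can only decrease with degree; the practical question is whether the decrease justifies the extra coefficients.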

  1. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on the graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
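    The optimal cutting path idea behind the boundary correction can be sketched with dynamic programming on an overlap-error grid, which is equivalent to a shortest path in the corresponding graph. This is a generic sketch of the technique, not the CCSIM implementation.

    ```python
    import numpy as np

    def optimal_cut_path(cost):
        """Minimum-cost top-to-bottom cutting path through an overlap-error
        grid: each row's cut cell must be adjacent (within one column) to
        the previous row's, and the accumulated cost is minimized."""
        h, w = cost.shape
        acc = cost.astype(float).copy()
        for i in range(1, h):
            for j in range(w):
                lo, hi = max(j - 1, 0), min(j + 2, w)
                acc[i, j] += acc[i - 1, lo:hi].min()
        # Backtrack from the cheapest cell in the bottom row
        path = [int(acc[-1].argmin())]
        for i in range(h - 2, -1, -1):
            j = path[-1]
            lo, hi = max(j - 1, 0), min(j + 2, w)
            path.append(lo + int(acc[i, lo:hi].argmin()))
        return path[::-1]  # column index of the cut in each row

    # The cut follows the low-error column through the overlap region
    print(optimal_cut_path(np.array([[5.0, 0.0, 5.0],
                                     [5.0, 0.0, 5.0],
                                     [5.0, 0.0, 5.0]])))
    ```

    Stitching the two patterns along such a path removes the visible patchiness at pattern boundaries, since the seam runs where the overlapping realizations already agree.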

  2. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter; Joshi, Vasant

    2011-06-01

    Simulation of low velocity impact on structures or high velocity penetration in armor materials relies heavily on constitutive material models. The model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high strength 7075-T651 aluminum alloys. Johnson-Cook model constants determined for Al7075-T651 alloy bar material failed to correctly simulate the penetration into 1-inch-thick Al7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results are drastically different, it is important to determine constitutive parameters from the actual material used in impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, we performed quasi-static and high strain rate tensile tests on specimens fabricated in the longitudinal, transverse, and thickness directions of 1-inch-thick Al7075-T651 plate. Flow stresses at a strain rate of ~1100/s in the longitudinal and transverse directions are similar, around 670 MPa, and decrease to 620 MPa in the thickness direction. These data are lower than the flow stress of 760 MPa measured in Al7075-T651 bar stock.

  3. Joint Alignment of Underwater and Above-The-Water Photogrammetric 3D Models by Independent Models Adjustment

    NASA Astrophysics Data System (ADS)

    Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F.

    2015-04-01

    The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, is still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic/optical sensors respectively underwater and in air (most common) or optical/optical sensors both below and above the water level. In both cases, the system must be calibrated and a ship must be used, properly equipped also with a navigation system for the alignment of sequential 3D point clouds. Such a system is usually highly expensive and has been proved to work with still structures. On the other hand, for free floating objects it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the successive alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is the case of a ship after an accident, whose damaged part is underwater and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical explanation of the results obtained from the adoption of the method for the survey of a small pleasure boat in floating condition.

  4. Microbial Communities Model Parameter Calculation for TSPA/SR

    SciTech Connect

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  5. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  6. Parameter optimization in differential geometry based solvation models.

    PubMed

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.
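
    The abstract does not spell out the authors' perturbation/convex-optimization algorithm, so the sketch below shows only the generic idea: when the predicted solvation free energy is linear in its parameters, a ridge-regularized least-squares fit gives a convex problem whose solution is stable. Feature names and the regularization weight are illustrative assumptions.

```python
import numpy as np

def fit_parameters(F, g_exp, lam=1e-6):
    """Least-squares parameter fit with Tikhonov (ridge) regularization.

    F:     (n_molecules, n_params) matrix of per-molecule model terms
           (e.g. surface-area, volume and electrostatic contributions).
    g_exp: (n_molecules,) experimental solvation free energies.
    lam:   regularization weight keeping the solution well-conditioned.
    Solves the normal equations (F^T F + lam I) p = F^T g.
    """
    F = np.asarray(F, float)
    g = np.asarray(g_exp, float)
    A = F.T @ F + lam * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ g)
```

    Because the objective is convex, the fitted parameters are the unique minimizer; the regularization term plays the stabilizing role that the abstract attributes to the perturbation analysis.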

  7. Bayesian analysis of inflation: Parameter estimation for single field models

    SciTech Connect

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-02-15

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ^n with n = 2/3, 1, 2, and 4; natural inflation; and "hilltop" inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the "primordial dark ages" between TeV and grand unified theory scale energies.

  8. Expanding the model: anisotropic displacement parameters in protein structure refinement.

    PubMed

    Merritt, E A

    1999-06-01

    Recent technological improvements in crystallographic data collection have led to a surge in the number of protein structures being determined at atomic or near-atomic resolution. At this resolution, structural models can be expanded to include anisotropic displacement parameters (ADPs) for individual atoms. New protocols and new tools are needed to refine, analyze and validate such models optimally. One such tool, PARVATI, has been used to examine all protein structures (peptide chains >50 residues) for which expanded models including ADPs are available from the Protein Data Bank. The distribution of anisotropy within each of these refined models is broadly similar across the entire set of structures, with a mean anisotropy A in the range 0.4-0.5. This is a significant departure from a purely isotropic model and explains why the inclusion of ADPs yields a substantial improvement in the crystallographic residuals R and Rfree. The observed distribution of anisotropy may prove useful in the validation of very high resolution structures. A more complete understanding of this distribution may also allow the development of improved protein structural models, even at lower resolution.
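
    The anisotropy A of an atom is conventionally defined as the ratio of the smallest to the largest eigenvalue of its 3×3 displacement tensor U; the sketch below follows that standard crystallographic definition (assumed, not quoted from the paper) for a single atom.

```python
import numpy as np

def anisotropy(U):
    """Anisotropy A of an anisotropic displacement parameter (ADP) tensor.

    U: symmetric 3x3 matrix built from the six refined ADP components
       (U11, U22, U33, U12, U13, U23).
    Returns smallest/largest eigenvalue; A = 1 is perfectly isotropic.
    """
    evals = np.linalg.eigvalsh(np.asarray(U, float))  # sorted ascending
    return evals[0] / evals[2]
```

    An isotropic atom (U proportional to the identity) gives A = 1, so the mean A of 0.4-0.5 reported above corresponds to markedly flattened or elongated displacement ellipsoids.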

  9. Parameter uncertainty analysis for an EMIC and a terrestrial vegetation model

    NASA Astrophysics Data System (ADS)

    Tachiiri, K.; Hargreaves, J. C.; Annan, J. D.; Oka, A.; Ito, A.; Kawamiya, M.

    2009-04-01

    For quantitative discussions of the reliability of modeled future climate, the sensitivity of a model's outputs to possible perturbations needs to be examined carefully. However, as it is not realistic for state-of-the-art GCMs to carry out long ensemble runs with a large number of members, a reasonable substitute for such analyses is to use Earth system models of intermediate complexity (EMICs). MIROC-lite, the EMIC used in this study, was originally developed in 2001 based on MIROC (a GCM) and consists of an ocean GCM and a 2D energy-moisture balance model for the atmosphere. Using this model, we carried out an ensemble experiment perturbing 14 parameters (suggested by the model developer) at once, with 300 members whose parameter sets were generated by Latin hypercube sampling so that all parameters have flat distributions in the predetermined ranges. After a 3,000 year run to obtain quasi-equilibrium states, the averages of air temperature, specific humidity, ocean temperature and ocean salinity over the last 100 years were compared to NCEP/NCAR reanalysis or World Ocean Atlas observation data. Consequently, it was found that among the perturbed parameters, heat diffusivity plays the most significant role in deciding the pattern of the variables examined in this analysis, while the amount of the freshwater flux adjustment plays a comparable role for ocean salinity. In the same manner, to investigate the uncertainty in terrestrial vegetation models, multi-parameter ensemble experiments perturbing nine parameters (suggested by the model developer) which control photosynthesis and soil decomposition were conducted for two models: Sim-CYCLE (Simulation model of Carbon Cycle in Land Ecosystems) and its successor VISIT (Vegetation Integrative Simulator for Trace gases). In this experiment, again with 300 members generated by Latin hypercube sampling, perturbation coefficients of 0.5-1.5 were multiplied to the default values, and it was revealed that a larger
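
    The Latin hypercube design used above splits each parameter's range into N equal-probability strata and draws exactly one sample per stratum, so every parameter gets flat, fully stratified marginal coverage. A minimal pure-Python sketch (illustrative; not the authors' actual sampler):

```python
import random

def latin_hypercube(n_members, ranges, seed=0):
    """Generate n_members parameter sets with flat marginal distributions.

    ranges: list of (low, high) tuples, one per parameter.
    Each parameter's interval is split into n_members equal strata; every
    stratum is sampled exactly once, and the stratum order is shuffled
    independently per parameter to decorrelate the columns.
    """
    rng = random.Random(seed)
    samples = [[0.0] * len(ranges) for _ in range(n_members)]
    for j, (lo, hi) in enumerate(ranges):
        strata = list(range(n_members))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_members  # point within stratum s
            samples[i][j] = lo + u * (hi - lo)
    return samples
```

    For the vegetation-model experiment above, `latin_hypercube(300, [(0.5, 1.5)] * 9)` would produce the 300 sets of multiplicative perturbation coefficients.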

  10. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model-scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted in a heavy gas test medium whose range of densities permits the acquisition of data for several Reynolds and Lock number combinations.
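
    The two scaling parameters named above can be computed directly: the Lock number is the standard ratio of aerodynamic to inertial flap forces, γ = ρ a c R⁴ / I_b, and the blade-section Reynolds number is ρ V c / μ. The definitions are textbook rotorcraft relations, not taken from the paper; the sketch makes explicit why varying the test-medium density ρ shifts both numbers together.

```python
def lock_number(rho, a, c, R, I_b):
    """Blade Lock number: ratio of aerodynamic to inertial flap forces.

    rho: test-medium density (kg/m^3)
    a:   blade-section lift-curve slope (1/rad)
    c:   blade chord (m)
    R:   rotor radius (m)
    I_b: blade flapping inertia about the hinge (kg m^2)
    """
    return rho * a * c * R**4 / I_b

def reynolds_number(rho, V, c, mu):
    """Blade-section Reynolds number at local speed V and chord c."""
    return rho * V * c / mu
```

    Both numbers scale linearly with density, so a variable-density tunnel can realize several Reynolds-Lock combinations with a single blade set.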

  11. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  12. Dynamic fe Model of Sitting Man Adjustable to Body Height, Body Mass and Posture Used for Calculating Internal Forces in the Lumbar Vertebral Disks

    NASA Astrophysics Data System (ADS)

    Pankoke, S.; Buck, B.; Woelfel, H. P.

    1998-08-01

    Long-term whole-body vibration can cause degeneration of the lumbar spine. Therefore existing degeneration has to be assessed, as well as industrial working places, to prevent further damage. Hence, the mechanical stress in the lumbar spine—especially in the three lower vertebrae—has to be known. This stress can be expressed as internal forces. These internal forces cannot be evaluated experimentally, because force transducers cannot be implanted in the force lines for ethical reasons. Thus it is necessary to calculate the internal forces with a dynamic mathematical model of sitting man. A two-dimensional dynamic finite element model of sitting man is presented which allows calculation of these unknown internal forces. The model is based on an anatomic representation of the lower lumbar spine (L3-L5). This lumbar spine model is incorporated into a dynamic model of the upper torso with neck, head and arms, as well as a model of the body caudal to the lumbar spine with pelvis and legs. Additionally, a simple dynamic representation of the viscera is used. All these parts are modelled as rigid bodies connected by linear stiffnesses. Energy dissipation is modelled by assigning modal damping ratios to the calculated undamped eigenvalues. Geometry and inertial properties of the model are determined according to human anatomy. Stiffnesses of the spine model are derived from static in-vitro experiments in references [1] and [2]. Remaining stiffness parameters and parameters for energy dissipation are determined by using parameter identification to fit measurements in reference [3]. The model, which is available in 3 different postures, allows one to adjust its parameters for body height and body mass to the values of the person for which internal forces have to be calculated.

  13. [Structural adjustment, cultural adjustment?].

    PubMed

    Dujardin, B; Dujardin, M; Hermans, I

    2003-12-01

    Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques that have been made. The cultural consequences of SAPs are introduced and described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.

  14. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  15. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  16. Assessment of an adjustment factor to model radar range dependent error

    NASA Astrophysics Data System (ADS)

    Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.

    2012-09-01

    Quantitative radar precipitation estimates are affected by errors from many causes, such as radar miscalibration, range degradation, attenuation, ground clutter, variability of the Z-R relation, variability of the drop size distribution, vertical air motion, anomalous propagation and beam blocking. Range degradation (including beam broadening and sampling of precipitation at increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the trend of the G/R ratio against range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km from the radar the overall range error is independent of the distance from Polar 55C and no range correction is needed. Nevertheless, up to this distance, the G/R ratio can show a concave trend with range, which is due to interception of the melting layer by the radar beam during stratiform events.
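
    The core of such an adjustment factor can be sketched very simply (an illustration of the G/R idea, not the paper's actual regression form): bin gauge-radar pairs by range, compute the mean G/R ratio per bin, and multiply radar estimates by the factor of their range bin.

```python
def gr_adjustment(pairs, bin_km=10.0):
    """Mean gauge/radar ratio per range bin.

    pairs: iterable of (range_km, gauge_mm, radar_mm) with radar_mm > 0.
    Returns {bin_index: mean G/R}, usable as a multiplicative
    range-dependent correction for radar rainfall estimates.
    """
    sums, counts = {}, {}
    for rng_km, g, r in pairs:
        b = int(rng_km // bin_km)
        sums[b] = sums.get(b, 0.0) + g / r
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

def correct(radar_mm, rng_km, factors, bin_km=10.0):
    """Apply the bin's G/R factor (1.0 where no gauge data exist)."""
    return radar_mm * factors.get(int(rng_km // bin_km), 1.0)
```

    A factor near 1.0 in the bins within 40 km would reproduce the paper's finding that no range correction is needed close to the radar.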

  17. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    SciTech Connect

    Konya, Andras

    2006-12-15

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.

  18. Adjusting for Health Status in Non-Linear Models of Health Care Disparities

    PubMed Central

    Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.

    2009-01-01

    This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070

  19. Adjusting for Health Status in Non-Linear Models of Health Care Disparities.

    PubMed

    Cook, Benjamin L; McGuire, Thomas G; Meara, Ellen; Zaslavsky, Alan M

    2009-03-01

    This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice.
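
    The rank and replace method can be sketched as follows (a deliberately simplified illustration assuming a single scalar health-status index; the published method operates on model-based health predictions): rank both groups on health status and assign each minority member the health value of the equally-ranked majority member, so the adjusted comparison inherits the majority's marginal distribution of health while SES is left free to mediate.

```python
def rank_and_replace(minority_health, majority_health):
    """Map each minority health value to the equally-ranked majority value.

    Returns the adjusted minority health-status values, whose marginal
    distribution matches the majority group's (the IOM-concordant
    adjustment), preserving each member's within-group rank.
    """
    n, m = len(minority_health), len(majority_health)
    maj_sorted = sorted(majority_health)
    # Rank of each minority member within its own group (0-based)
    order = sorted(range(n), key=lambda i: minority_health[i])
    adjusted = [0.0] * n
    for rank, i in enumerate(order):
        # Map rank in [0, n) onto the majority's empirical quantiles
        j = min(m - 1, int(rank * m / n))
        adjusted[i] = maj_sorted[j]
    return adjusted
```

    Because only the marginal health distribution is replaced, each individual's SES variables travel with them into the disparity prediction, which is what distinguishes this adjustment from simple covariate control.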

  20. A Lumped Parameter Model for Feedback Studies in Tokamaks

    NASA Astrophysics Data System (ADS)

    Chance, M. S.; Chu, M. S.; Okabayashi, M.; Glasser, A. H.

    2004-11-01

    A lumped circuit model for feedback stabilization studies in tokamaks is calculated. This work parallels the formulation by Boozer [a], is analogous to the studies done on axisymmetric modes [b], and generalizes the cylindrical model [c]. The lumped circuit parameters are derived from the DCON-derived eigenfunctions of the plasma, the resistive shell and the feedback coils. The inductances are calculated using the VACUUM code, which is designed to calculate the responses between the various elements in the feedback system. The results are compared with the normal mode [d] and the system identification [e] approaches. [a] A.H. Boozer, Phys. Plasmas 5, 3350 (1998). [b] E.A. Lazarus et al., Nucl. Fusion 30, 111 (1990). [c] M. Okabayashi et al., Nucl. Fusion 38, 1607 (1998). [d] M.S. Chu et al., Nucl. Fusion 43, 441 (2003). [e] Y.Q. Liu et al., Phys. Plasmas 7, 3681 (2000).

  1. PET-Specific Parameters and Radiotracers in Theoretical Tumour Modelling

    PubMed Central

    Marcu, Loredana G.; Bezak, Eva

    2015-01-01

    The innovation of computational techniques serves as an important step toward optimized, patient-specific management of cancer. In particular, in silico simulation of tumour growth and treatment response may eventually yield accurate information on disease progression, enhance the quality of cancer treatment, and explain why certain therapies are effective where others are not. In silico modelling is demonstrated to considerably benefit from information obtainable with PET and PET/CT. In particular, models have successfully integrated tumour glucose metabolism, cell proliferation, and cell oxygenation from multiple tracers in order to simulate tumour behaviour. With the development of novel radiotracers to image additional tumour phenomena, such as pH and gene expression, the value of PET and PET/CT data for use in tumour models will continue to grow. In this work, the use of PET and PET/CT information in in silico tumour models is reviewed. The various parameters that can be obtained using PET and PET/CT are detailed, as well as the radiotracers that may be used for this purpose, their utility, and limitations. The biophysical measures used to quantify PET and PET/CT data are also described. Finally, a list of in silico models that incorporate PET and/or PET/CT data is provided and reviewed. PMID:25788973

  2. An assessment of the ICE6G_C (VM5A) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, Anthony; Tregoning, Paul; Dehecq, Amaury

    2016-04-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a) [Peltier et al., 2015; Argus et al., 2014], is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology and, of course, geodynamics (Earth rheology studies). In this presentation I will assess some aspects of the ICE6G_C(VM5a) model and the accompanying published data sets. I will demonstrate that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Further, the published spherical harmonic coefficients - which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA) - will be shown to contain excessive power for degree ≥ 90, to be physically implausible and to not represent accurately the ICE6G_C(VM5a) model. The excessive power in the high degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. [2011] is applied but, when correct Stokes coefficients are used, the empirical relationship will be shown to produce excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. [2011]. Finally, a global radial velocity field for the present-day GIA signal, and corresponding Stokes coefficients, will be presented for the ICE6G_C ice model history using the VM5a rheology model. These results have been obtained using the ANU group's CALSEA software package and can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals without any of the shortcomings of the previously published data sets. We denote the new data sets ICE6G_ANU.

  3. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4D-Var) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
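
    The variational idea can be illustrated on a toy one-pool stand-in for DALEC, dC/dt = GPP - k·C: find the turnover rate k that minimizes the squared misfit to observed carbon stocks by gradient descent. This is a drastically simplified sketch; the finite-difference gradient here stands in for the adjoint-computed gradient, and all names and values are hypothetical.

```python
def run_model(k, gpp, c0, n_steps, dt=1.0):
    """Forward model: one carbon pool with input gpp and turnover rate k."""
    c, traj = c0, []
    for _ in range(n_steps):
        c = c + dt * (gpp - k * c)
        traj.append(c)
    return traj

def fit_k(obs, gpp, c0, k0=0.1, lr=1e-5, iters=3000):
    """Fit k by minimizing the sum of squared residuals with gradient
    descent; the central finite difference plays the role of the adjoint."""
    k, eps = k0, 1e-6
    def cost(kk):
        return sum((m - o) ** 2
                   for m, o in zip(run_model(kk, gpp, c0, len(obs)), obs))
    for _ in range(iters):
        grad = (cost(k + eps) - cost(k - eps)) / (2 * eps)
        k -= lr * grad
    return k
```

    In a real 4D-Var setting the gradient comes from the adjoint model at the cost of one extra backward integration, which is what makes variational estimation tractable for models with many parameters.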

  4. Assessing WRF Model Parameter Sensitivity and Optimization: A Case Study with 5-day Summer Precipitation Forecasting in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Quan, JiPing

    2015-04-01

    A global sensitivity analysis method was used to identify the parameters of the Weather Research and Forecasting (WRF) model that exert the most influence on precipitation forecasting skill. Twenty-three adjustable parameters were selected from seven physical components of the WRF model. The sensitivity was evaluated based on skill scores calculated over nine 5-day precipitation forecasts during the summer seasons from 2008 to 2010 in the Greater Beijing Area in North China. We found that 8 parameters are more sensitive than the others. Storm type seems to have no impact on the list of sensitive parameters, but does influence the degree of sensitivity. We also examined the physical interpretation of the sensitivity analysis results. The results of this study are used for further optimization of the WRF model parameters to improve WRF predictive performance. The improvement rate reached 17% with the new parameter values, showing that the screening and optimization are very effective in reducing the uncertainty of the WRF parameters.
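
    A common choice for this kind of parameter screening is the Morris elementary-effects method — assumed here for illustration, since the abstract does not name the specific global method: perturb one parameter at a time from random base points and rank parameters by the mean absolute effect on the skill score.

```python
import random

def morris_screening(score, ranges, n_traj=20, seed=0):
    """Rank parameters by mean absolute elementary effect on score().

    score:  function mapping a parameter list to a scalar skill score
            (in practice, one full model run per evaluation).
    ranges: list of (low, high) tuples, one per adjustable parameter.
    """
    rng = random.Random(seed)
    k = len(ranges)
    effects = [0.0] * k
    for _ in range(n_traj):
        x = [lo + rng.random() * (hi - lo) for lo, hi in ranges]
        base = score(x)
        for j, (lo, hi) in enumerate(ranges):
            delta = 0.1 * (hi - lo)
            x2 = list(x)
            # Step inward if the perturbation would leave the range
            x2[j] = x[j] + delta if x[j] + delta <= hi else x[j] - delta
            effects[j] += abs(score(x2) - base) / delta
    return [e / n_traj for e in effects]
```

    Parameters with the largest mean effects (here, the top 8 of 23 in the study above) are retained for the subsequent optimization, while the rest are fixed at their defaults.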

  5. Pressure pulsation in roller pumps: a validated lumped parameter model.

    PubMed

    Moscato, Francesco; Colacino, Francesco M; Arabia, Maurizio; Danieli, Guido A

    2008-11-01

    During open-heart surgery roller pumps are often used to maintain the circulation of blood through the patient's body. They present numerous key features, but they suffer from several limitations: (a) they normally deliver uncontrolled pulsatile inlet and outlet pressure; (b) blood damage appears to be greater than that encountered with centrifugal pumps. A lumped parameter mathematical model of a roller pump (Sarns 7000, Terumo CVS, Ann Arbor, MI, USA) was developed to dynamically simulate pressures at the pump inlet and outlet in order to clarify the uncontrolled pulsation mechanism. Inlet and outlet pressures obtained by the mathematical model have been compared with those measured in various operating conditions: different roller rotating speeds, different tube occlusion rates, and different clamping degrees at the pump inlet and outlet. Model results agree with measured pressure waveforms, whose oscillations are generated by the tube compression/release mechanism during the rollers' engaging and disengaging phases. Average Euclidean Error (AEE) was 20 mmHg and 33 mmHg for inlet and outlet pressure estimates, respectively. The normalized AEE never exceeded 0.16. The developed model can be exploited for designing roller pumps with improved performance aimed at reducing the undesired pressure pulsation.
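
    The flavor of such a lumped-parameter model can be sketched as a compliance-resistance (Windkessel-type) outlet load fed by the roller flow, with a brief flow dip at each roller engagement/disengagement. All element values and the dip waveform below are hypothetical illustrations, not the parameters identified for the Sarns 7000.

```python
import math

def simulate_outlet_pressure(rpm, n_rollers=2, C=1e-3, R=80.0,
                             q_mean=5.0, dip=0.4, t_end=2.0, dt=1e-3):
    """Outlet pressure of a roller pump feeding an RC (Windkessel) load.

    Forward-Euler integration of C dP/dt = Q(t) - P/R, where the flow
    Q(t) dips by fraction `dip` at each roller event (n_rollers events
    per revolution). Units are illustrative (mL/s, mmHg, mL/mmHg).
    """
    f_ev = rpm / 60.0 * n_rollers        # engagement events per second
    p, trace = q_mean * R, []            # start at steady-state pressure
    t = 0.0
    while t < t_end:
        # Narrow periodic flow dip synchronized with roller events
        pulse = max(0.0, math.cos(2 * math.pi * f_ev * t)) ** 8
        q = q_mean * (1.0 - dip * pulse)
        p += dt * (q - p / R) / C
        trace.append(p)
        t += dt
    return trace
```

    Each roller event briefly starves the RC load, producing the pressure oscillations described above; increasing the compliance C or the event frequency smooths the waveform, which is the kind of trade-off such a model lets a designer explore.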

  6. Analysing DNA structural parameters using a mesoscopic model

    NASA Astrophysics Data System (ADS)

    Amarante, Tauanne D.; Weber, Gerald

    2014-03-01

    The Peyrard-Bishop model is a mesoscopic approximation for modelling DNA and RNA molecules. Several variants of this model exist, from 3D Hamiltonians including torsional angles to simpler 2D versions. Currently, we are able to parametrize the 2D variants of the model, which allows us to extract important information about the molecule. For example, with this technique we were recently able to obtain the hydrogen bonds of RNA from melting temperatures, which previously were obtainable only from NMR measurements. Here, we take the 3D torsional Hamiltonian and set the angles to zero. Curiously, in doing this we do not recover the traditional 2D Hamiltonians. Instead, we obtain a different 2D Hamiltonian which now includes a base pair step distance, commonly known as rise. A detailed knowledge of the rise distance is important as it determines the overall length of the DNA molecule. This 2D Hamiltonian provides us with the exciting prospect of obtaining DNA structural parameters from melting temperatures. Our results for the rise distance at low salt concentration are in good qualitative agreement with those from several published X-ray measurements. We also found an important dependence of the rise distance on salt concentration. In contrast to our previous calculations, the elastic constants now show little dependence on salt concentration, which appears to be closer to what is seen experimentally in DNA flexibility experiments.

  7. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations, which usually involve EVA, has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific to certain camera/lens combinations, and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.

  8. Impact of parameter uncertainty on carbon sequestration modeling

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.

    2013-12-01

    Geologic carbon sequestration through injection of supercritical carbon dioxide (CO2) into the subsurface is one option to reduce anthropogenic CO2 emissions. Widespread industrial-scale deployment, on the order of giga-tonnes of CO2 injected per year, will be necessary for carbon sequestration to make a significant contribution to solving the CO2 problem. Deep saline formations are suitable targets for CO2 sequestration due to their large storage capacity, high injectivity, and favorable pressure and temperature regimes. Due to the large areal extent of saline formations, and the need to inject very large amounts of CO2, multiple sequestration operations are likely to be developed in the same formation. The injection-induced migration of both CO2 and resident formation fluids (brine) needs to be predicted to determine the feasibility of industrial-scale deployment of carbon sequestration. Due to the large spatial scale of the domain, many of the modeling parameters (e.g., permeability) will be highly uncertain. In this presentation we discuss a sensitivity analysis of both pressure response and CO2 plume migration to variations of model parameters such as permeability, compressibility and temperature. The impact of uncertainty in the stratigraphic succession is also explored. The sensitivity analysis is conducted using a numerical vertically-integrated modeling approach. The Illinois Basin, USA is selected as the test site for this study, due to its large storage capacity and large number of stationary CO2 sources. As there is currently only one active CO2 injection operation in the Illinois Basin, a hypothetical injection scenario is used, where CO2 is injected at the locations of large CO2 emitters related to electricity generation, ethanol production and hydrocarbon refinement. The Area of Review (AoR) is chosen as the comparison metric, as it includes both the CO2 plume size and pressure response.

  9. Dietary reference intakes for zinc may require adjustment for phytate intake based upon model predictions.

    PubMed

    Hambidge, K Michael; Miller, Leland V; Westcott, Jamie E; Krebs, Nancy F

    2008-12-01

    The quantities of total dietary zinc (Zn) and phytate are the principal determinants of the quantity of absorbed Zn. Recent estimates of Dietary Reference Intakes (DRI) for Zn by the Institute of Medicine (IOM) were based on data from low-phytate or phytate-free diets. The objective of this project was to estimate the effects of increasing quantities of dietary phytate on these DRI. We used a trivariate model of the quantity of Zn absorbed as a function of dietary Zn and phytate, with updated parameters, to estimate the phytate effect on the Estimated Average Requirement (EAR) and Recommended Dietary Allowance for Zn for both men and women. The EAR predicted from the model at 0 phytate was very close to the EAR of the IOM. The addition of 1000 mg phytate doubled the EAR and adding 2000 mg phytate tripled the EAR. The model also predicted that the EAR for men and women could not be attained with phytate:Zn molar ratios > 11:1 and 15:1, respectively. The phytate effect on upper limits (UL) was predicted by first estimating the quantity of absorbed Zn corresponding to the UL of 40 mg for phytate-free diets, which is 6.4 mg Zn/d. Extrapolation of the model suggested, for example, that with 900 mg/d phytate, 100 mg dietary Zn is required to attain 6.4 mg absorbed Zn/d. Experimental studies with higher Zn intakes are required to test these predictions.
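The behavior the abstract describes (absorbed Zn rising with dietary Zn and falling with phytate) can be sketched with a saturable-binding model of the same general trivariate shape, in which absorption is the smaller root of a quadratic. The constants `amax`, `kr`, and `kp` below are illustrative placeholders, not the published fitted parameters.

```python
import math

def absorbed_zinc(tdz, tdp, amax=6.0, kr=3.0, kp=5.0):
    """Total absorbed zinc as the smaller root of a saturable-binding
    quadratic in total dietary zinc (tdz) and phytate (tdp).
    amax, kr, kp are illustrative placeholders, NOT the published fit."""
    b = amax + tdz + kr * (1.0 + tdp / kp)
    return 0.5 * (b - math.sqrt(b * b - 4.0 * amax * tdz))

print(absorbed_zinc(10.0, 0.0))   # → 4.0 with these placeholder constants
print(absorbed_zinc(10.0, 20.0))  # lower: added phytate suppresses absorption
```

The model's monotonic behavior mirrors the abstract's prediction that a given intake of absorbed Zn requires ever more dietary Zn as phytate increases.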

  10. Glacial isostatic adjustment associated with the Barents Sea ice sheet: A modelling inter-comparison

    NASA Astrophysics Data System (ADS)

    Auriac, A.; Whitehouse, P. L.; Bentley, M. J.; Patton, H.; Lloyd, J. M.; Hubbard, A.

    2016-09-01

    The 3D geometrical evolution of the Barents Sea Ice Sheet (BSIS), particularly during its late-glacial retreat phase, remains largely ambiguous due to the paucity of direct marine- and terrestrial-based evidence constraining its horizontal and vertical extent and chronology. One way of validating the numerous BSIS reconstructions previously proposed is to collate and apply them under a wide range of Earth models and to compare prognostic (isostatic) output through time with known relative sea-level (RSL) data. Here we compare six contrasting BSIS load scenarios via a spherical Earth system model and derive a best-fit χ² statistic using RSL data from the four main terrestrial regions within the domain: Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway. Poor χ² values allow two load scenarios to be dismissed, leaving four that agree well with RSL observations. The remaining four scenarios optimally fit the RSL data when combined with Earth models that have an upper mantle viscosity of 0.2-2 × 10²¹ Pa s, while there is less sensitivity to the lithosphere thickness (ranging from 71 to 120 km) and lower mantle viscosity (spanning 1-50 × 10²¹ Pa s). GPS observations are also compared with predictions of present-day uplift across the Barents Sea. Key locations where relative sea-level and GPS data would prove critical in constraining future ice-sheet modelling efforts are also identified.

  11. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    ERIC Educational Resources Information Center

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  12. Rejection, Feeling Bad, and Being Hurt: Using Multilevel Modeling to Clarify the Link between Peer Group Aggression and Adjustment

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.

    2010-01-01

    The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…

  13. Internal Working Models and Adjustment of Physically Abused Children: The Mediating Role of Self-Regulatory Abilities

    ERIC Educational Resources Information Center

    Hawkins, Amy L.; Haskett, Mary E.

    2014-01-01

    Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…

  14. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    ERIC Educational Resources Information Center

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  15. The Effectiveness of the Strength-Centered Career Adjustment Model for Dual-Career Women in Taiwan

    ERIC Educational Resources Information Center

    Wang, Yu-Chen; Tien, Hsiu-Lan Shelley

    2011-01-01

    The authors investigated the effectiveness of a Strength-Centered Career Adjustment Model for dual-career women (N = 28). Fourteen women in the experimental group received strength-centered career counseling for 6 to 8 sessions; the 14 women in the control group received test services in 1 to 2 sessions. All participants completed the Personal…

  16. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined databases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the database. Comparison of observed values of storm-runoff load and event-mean concentration with the values predicted from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. The standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee database. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared with estimation from the unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  17. Modeling parameter extraction for DNQ-novolak thick film resists

    NASA Astrophysics Data System (ADS)

    Henderson, Clifford L.; Scheer, Steven A.; Tsiartas, Pavlos C.; Rathsack, Benjamen M.; Sagan, John P.; Dammel, Ralph R.; Erdmann, Andreas; Willson, C. Grant

    1998-06-01

    Optical lithography with special thick film DNQ-novolac photoresists has been practiced for many years to fabricate microstructures that require feature heights ranging from several to hundreds of microns, such as thin film magnetic heads. It is common in these thick film photoresist systems to observe interesting non-uniform profiles with narrow regions near the top surface of the film that transition into broader and more concave shapes near the bottom of the resist profile. A number of explanations have been proposed for these observations, including the formation of 'dry skins' at the resist surface and the presence of solvent gradients in the film which serve to modify the local development rate of the photoresist. There have been few detailed experimental studies of the development behavior of thick film resists. This has been due in part to the difficulty of studying these films with conventional dissolution rate monitors (DRMs). In general, this lack of experimental data, along with other factors, has made simulation and modeling of thick film resist performance difficult. As applications such as thin film head manufacturing drive to smaller features with higher aspect ratios, the need for accurate thick film simulation capability continues to grow. A new multi-wavelength DRM tool has been constructed and used in conjunction with a resist bleaching tool and rigorous parameter extraction techniques to establish exposure and development parameters for two thick film resists, AZ™ 4330-RS and AZ™ 9200. Simulations based on these parameters show good agreement with resist profiles for these two resists.

  18. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. R.

    2013-04-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that model parameters calibrated by least-squares fitting provide little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulations of CLM4 under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
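The MCMC Bayesian inversion idea above can be sketched with a minimal random-walk Metropolis sampler. This is a generic one-parameter toy, not the CLM4 setup; the "runoff" model, noise level, and prior are invented for illustration.

```python
import math
import random

def metropolis(log_post, x0, n_samples=6000, step=0.1, seed=1):
    """Random-walk Metropolis sampler for a scalar parameter (minimal sketch)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy calibration: recover a runoff-scaling parameter from noisy "observations".
true_theta = 1.5
noise = random.Random(0)
times = list(range(1, 11))
obs = [true_theta * t + noise.gauss(0.0, 0.1) for t in times]

def log_post(theta):
    # Flat prior; Gaussian likelihood with known sigma = 0.1.
    return -sum((o - theta * t) ** 2 for t, o in zip(times, obs)) / (2 * 0.1 ** 2)

chain = metropolis(log_post, x0=0.0)
estimate = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

As in the abstract, adding observations narrows the posterior: with more `(t, obs)` pairs the accepted samples cluster more tightly around the calibrated value.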

  19. Validity of methods for model selection, weighting for model uncertainty, and small sample adjustment in capture-recapture estimation.

    PubMed

    Hook, E B; Regal, R R

    1997-06-15

    In log-linear capture-recapture approaches to population size, the method of model selection may have a major effect upon the estimate. In addition, the estimate may also be very sensitive if certain cells are null or very sparse, even with the use of multiple sources. The authors evaluated 1) various approaches to the issue of model uncertainty and 2) a small sample correction for three or more sources recently proposed by Hook and Regal. The authors compared the estimates derived using 1) three different information criteria: Akaike's Information Criterion (AIC) and two alternative formulations of the Bayesian Information Criterion (BIC), one proposed by Draper ("two pi") and one by Schwarz ("not two pi"); 2) two related methods of weighting estimates associated with models; 3) the independent model; and 4) the saturated model, against the known totals in 20 different populations studied by five separate groups of investigators. For each method, the authors also compared the estimate derived with or without the proposed small sample correction. At least in these data sets, the use of AIC appeared on balance to be preferable. The BIC formulation suggested by Draper appeared slightly preferable to that suggested by Schwarz. Adjustment for model uncertainty appears to improve results slightly. The proposed small sample correction appeared to diminish relative log bias, but only when sparse cells were present. Otherwise, its use tended to increase relative log bias. Use of the saturated model (with or without the small sample correction) appears to be optimal if the associated interval is not uselessly large, and if one can plausibly exclude an all-source interaction. All other approaches led to an estimate that was too low by about one standard deviation.
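The information-criterion comparison at the heart of this abstract reduces to penalized log-likelihoods. A minimal sketch, with hypothetical fits standing in for the log-linear capture-recapture models (the labels, log-likelihoods, and sample size below are invented):

```python
import math

def aic(log_lik, k):
    """Akaike's Information Criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian Information Criterion (Schwarz form): k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical log-linear fits: (label, maximized log-likelihood, no. of parameters)
fits = [("independent", -142.3, 3),
        ("one interaction", -135.8, 4),
        ("saturated", -134.9, 7)]
n = 250  # sample size (hypothetical)

best_aic = min(fits, key=lambda f: aic(f[1], f[2]))[0]
best_bic = min(fits, key=lambda f: bic(f[1], f[2], n))[0]
```

Both criteria trade fit against parameter count; BIC's log n penalty grows with sample size, which is why the two can disagree on larger data sets.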

  20. Modeling soil detachment capacity by rill flow using hydraulic parameters

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Wang, Zhanli; Shen, Nan; Chen, Hao

    2016-04-01

    The relationship between soil detachment capacity (Dc) by rill flow and hydraulic parameters (e.g., flow velocity, shear stress, unit stream power, stream power, and unit energy) at low flow rates is investigated to establish an accurate experimental model. Experiments are conducted using a 4 × 0.1 m rill hydraulic flume with a constant artificial roughness on the flume bed. The flow rates range from 0.22 × 10⁻³ m² s⁻¹ to 0.67 × 10⁻³ m² s⁻¹, and the slope gradients vary from 15.8% to 38.4%. Regression analysis indicates that Dc by rill flow can be predicted using linear equations of flow velocity, stream power, unit stream power, and unit energy. Dc by rill flow fitted to shear stress can be predicted with a power function equation. Predictions based on flow velocity, unit energy, and stream power are powerful, but those based on shear stress, and especially on unit stream power, are relatively poor. The prediction based on flow velocity provides the best estimates of Dc by rill flow because of the simplicity and availability of its measurements. Owing to error in measuring flow velocity at low flow rates, the predictive abilities of all hydraulic parameters for Dc by rill flow are relatively low in this study compared with previous research. The accuracy of flow velocity measurements should be improved in future research.
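Both fit types in this abstract reduce to ordinary least squares, the power-function case after a log-log transform. A minimal sketch; the (stream power, Dc) and (shear stress, Dc) data points below are invented for illustration, not the experimental values.

```python
import math

def linear_fit(x, y):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Linear model Dc = a*w + b against hypothetical stream power values w
w  = [0.5, 1.0, 1.5, 2.0, 2.5]
dc = [0.9, 2.1, 3.0, 4.1, 4.9]
a, b = linear_fit(w, dc)

# Power-function model Dc = c * tau**m, linearized: ln Dc = m*ln tau + ln c
tau    = [2.0, 3.0, 4.0, 5.0]
dc_tau = [1.1, 2.4, 4.2, 6.4]
m, lnc = linear_fit([math.log(t) for t in tau], [math.log(d) for d in dc_tau])
c = math.exp(lnc)
```

The same routine serves both cases, which is part of why the linear flow-velocity predictor is attractive in practice.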

  1. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum, with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis, leading to the selection of a subset of the most sensitive parameters, which cause most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Rank Correlation Coefficients is used. We first quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root

  2. Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Wells, W. R.

    1979-01-01

    A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and the trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.
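The frequency-domain simplification this abstract relies on, convolution becoming pointwise multiplication under the Fourier transform, can be checked numerically with a naive DFT. This is a generic toy sketch, not the aircraft model; the sequences are arbitrary.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demonstration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT via the conjugate-symmetric formula."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_convolve(a, b):
    """Direct circular convolution: c[t] = sum_s a[s] * b[(t - s) mod n]."""
    n = len(a)
    return [sum(a[s] * b[(t - s) % n] for s in range(n)) for t in range(n)]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 0.0]
direct = circular_convolve(a, b)
# Convolution theorem: DFT(a ⊛ b) = DFT(a) · DFT(b), elementwise
via_dft = idft([A * B for A, B in zip(dft(a), dft(b))])
```

The two results agree to rounding error, which is exactly the algebraic shortcut that replaces the convolution integrals in the modified equations of motion.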

  3. Sound propagation and absorption in foam - A distributed parameter model.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
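The transmission-line analogy above maps foam properties onto per-unit-length circuit constants. The standard lossy-line relations for characteristic impedance and propagation constant can be sketched as follows; the numerical values are illustrative, not measured foam data.

```python
import cmath
import math

def line_parameters(R, L, G, C, omega):
    """Characteristic impedance Z0 and propagation constant gamma of a uniform
    transmission line with per-unit-length series resistance R, inductance L,
    shunt conductance G, and capacitance C, at angular frequency omega."""
    Z = R + 1j * omega * L  # series impedance per unit length
    Y = G + 1j * omega * C  # shunt admittance per unit length
    return cmath.sqrt(Z / Y), cmath.sqrt(Z * Y)

# Lossless sanity check: Z0 reduces to sqrt(L/C) and gamma is purely imaginary
Z0, gamma = line_parameters(R=0.0, L=2.5e-4, G=0.0, C=1.0e-7,
                            omega=2 * math.pi * 1e3)
```

Fitting measured acoustic impedance and propagation constants to these relations is what lets the physical properties of the foam be recovered from the analog line, as described in the abstract.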

  4. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate them through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model.
We employ this simple heat model to illustrate verification

  5. The effects of coping on adjustment: Re-examining the goodness of fit model of coping effectiveness.

    PubMed

    Masel, C N; Terry, D J; Gribble, M

    1996-01-01

    The primary aim of the present study was to examine the extent to which the effects of coping on adjustment are moderated by levels of event controllability. Specifically, the research tested two revisions to the goodness of fit model of coping effectiveness. First, it was hypothesized that the effects of problem management coping (but not problem appraisal coping) would be moderated by levels of event controllability. Second, it was hypothesized that the effects of emotion-focused coping would be moderated by event controllability, but only in the acute phase of a stressful encounter. To test these predictions, a longitudinal study was undertaken (185 undergraduate students participated in all three stages of the research). Measures of initial adjustment (low depression and coping efficacy) were obtained at Time 1. Four weeks later (Time 2), coping responses to a current or a recent stressor were assessed. Based on subjects' descriptions of the event, objective and subjective measures of event controllability were also obtained. Measures of concurrent and subsequent adjustment were obtained at Times 2 and 3 (two weeks later), respectively. There was only weak support for the goodness of fit model of coping effectiveness. The beneficial effects of a high proportion of problem management coping (relative to total coping efforts) on Time 3 perceptions of coping efficacy were more evident in high control than in low control situations. Other results of the research revealed that, irrespective of the controllability of the event, problem appraisal coping strategies and emotion-focused strategies (escapism and self-denigration) were associated with high and low levels of concurrent adjustment, respectively. The effects of these coping responses on subsequent adjustment were mediated through concurrent levels of adjustment.

  6. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    NASA Astrophysics Data System (ADS)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by the dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand (0° to 25°N, 95° to 110°E). These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, and the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° × 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the estimated VTEC from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilating with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the

  7. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    PubMed

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model of the river Inn, the hydrological model HQsim (a rainfall-runoff-discharge model), and the snow and ice melt model SES, which model the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, identification of the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Five model parameters could be identified as being sensitive when aiming for a model well calibrated for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role. PMID:19759453

  9. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Streamflow Observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L.

    2012-12-01

    This study aims at demonstrating the possibility of calibrating hydrologic parameters using surface flux and streamflow observations in version 4 of the Community Land Model (CLM4). Previously we showed that surface flux and streamflow calculations are sensitive to several key hydrologic parameters in CLM4, and discussed the necessity and possibility of parameter calibration. In this study, we evaluate the performance of several inversion strategies, including least-squares fitting, quasi-Monte Carlo (QMC) sampling-based Bayesian updating, and a Markov chain Monte Carlo (MCMC) Bayesian inversion approach. The parameters to be calibrated include the surface and subsurface runoff generation parameters and vadose zone soil water parameters. We discuss the effects of surface flux and streamflow observations on the inversion results and compare their consistency and reliability using both monthly and daily observations at various flux tower and MOPEX sites. We find that the sampling-based stochastic inversion approaches behave consistently: as more information comes in, the predictive intervals of the calibrated parameters as well as the misfits between calculated and observed values decrease. In general, the parameters identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or streamflow observations. We also evaluated the possibility of probabilistic model averaging for more consistent parameter estimation.
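    As a toy illustration of the MCMC strategy, the sketch below calibrates a single parameter of a linear stand-in model with a Metropolis sampler; the model form, flat prior, noise level, and step size are illustrative assumptions, not CLM4's actual parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "observations" from a toy forward model y = a * x plus noise.
    a_true, sigma = 2.5, 0.3
    x = np.linspace(0.1, 1.0, 50)
    y_obs = a_true * x + rng.normal(0.0, sigma, x.size)

    def log_post(a):
        # Flat prior on a; Gaussian likelihood with known noise sigma.
        r = y_obs - a * x
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    def metropolis(n_steps=5000, step=0.1, a0=1.0):
        """Random-walk Metropolis sampler over the single parameter a."""
        samples = np.empty(n_steps)
        a, lp = a0, log_post(a0)
        for i in range(n_steps):
            prop = a + rng.normal(0.0, step)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                a, lp = prop, lp_prop
            samples[i] = a
        return samples

    samples = metropolis()
    posterior_mean = samples[1000:].mean()  # discard burn-in
    ```

    As more observations are added, the posterior interval around `a` shrinks, mirroring the behaviour the abstract reports for its stochastic inversions.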

  10. Constraints of GRACE on the Ice Model and Mantle Rheology in Glacial Isostatic Adjustment Modeling in North-America

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.

    2009-05-01

    GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best-fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that non-linear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is shown for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied among 3.3 × 10⁻³⁴, 3.3 × 10⁻³⁵ and 3.3 × 10⁻³⁶ Pa⁻³ s⁻¹, the Newtonian viscosity η is varied between 1 × 10²¹ and 3 × 10²¹ Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice

  11. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    SciTech Connect

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. Use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for use of the electrolyte NRTL model with the major solutes in ICPP aqueous wastes.

  12. Sensitivity of numerical dispersion modeling to explosive source parameters

    SciTech Connect

    Baskett, R.L.; Cederwall, R.T.

    1991-02-13

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs.
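    For context, the simplest downwind-concentration estimate underlying such dispersion work is a Gaussian plume; the sketch below uses illustrative power-law dispersion coefficients and is not ARAC's operational algorithm:

    ```python
    import math

    def plume_concentration(q, u, x, y=0.0, h=0.0):
        """Ground-level Gaussian plume concentration (g/m^3) for emission rate q
        (g/s), wind speed u (m/s), downwind distance x (m), crosswind offset y (m)
        and effective release height h (m). The power-law dispersion coefficients
        are illustrative, roughly rural-neutral values."""
        sigma_y = 0.08 * x ** 0.9
        sigma_z = 0.06 * x ** 0.9
        return (q / (math.pi * u * sigma_y * sigma_z)
                * math.exp(-0.5 * (y / sigma_y) ** 2)
                * math.exp(-0.5 * (h / sigma_z) ** 2))
    ```

    The strong dependence on the release height `h` and the plume spread hints at why explosive source geometry assumptions dominate the model's sensitivity.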

  13. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in this study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
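    The BIC-based weighting that the abstract recommends can be sketched as follows; the residual sums of squares and parameter counts are hypothetical numbers, and equal prior model probabilities are assumed:

    ```python
    import numpy as np

    def bic(rss, n, k):
        """Bayesian information criterion for a Gaussian-error model with
        residual sum of squares rss, n data points and k parameters."""
        return n * np.log(rss / n) + k * np.log(n)

    def bma_weights(bics):
        """Posterior model probabilities from BIC differences (equal priors),
        computed relative to the best model for numerical stability."""
        b = np.asarray(bics, dtype=float)
        w = np.exp(-0.5 * (b - b.min()))
        return w / w.sum()

    # Three hypothetical parameterization methods with different fit/complexity.
    weights = bma_weights([bic(1.2, 100, 3), bic(1.1, 100, 6), bic(2.0, 100, 2)])
    ```

    Unlike a hard Occam's-window cut, these smooth weights retain a contribution from every competitive parameterization in the ensemble average.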

  14. Fundamental parameters of pulsating stars from atmospheric models

    NASA Astrophysics Data System (ADS)

    Barcza, S.

    2006-12-01

    A purely photometric method is reviewed to determine the distance, mass, equilibrium temperature, and luminosity of pulsating stars by using model atmospheres and hydrodynamics. T Sex is given as an example: on the basis of Kurucz atmospheric models and UBVRI data (in both the Johnson and Kron-Cousins systems), the variation of angular diameter, effective temperature, and surface gravity is derived as a function of phase; mass M = (0.76 ± 0.09) M⊙, distance d = 530 ± 67 pc, Rmax = 2.99 R⊙, Rmin = 2.87 R⊙, and magnitude-averaged visual absolute brightness ⟨MV⟩ = 1.17 ± 0.26 mag are found. During a pulsation cycle, four standstills of the atmosphere are pointed out, indicating the occurrence of two shocks in the atmosphere. The derived equilibrium temperature Teq = 7781 K and luminosity (28.3 ± 8.8) L⊙ locate T Sex on the blue edge of the instability strip in a theoretical Hertzsprung-Russell diagram. The differences between the physical parameters of this study and those of Liu & Janes (1990) are discussed.

  15. Mechanical models for insect locomotion: stability and parameter studies

    NASA Astrophysics Data System (ADS)

    Schmitt, John; Holmes, Philip

    2001-08-01

    We extend the analysis of simple models for the dynamics of insect locomotion in the horizontal plane, developed in [Biol. Cybern. 83 (6) (2000) 501] and applied to cockroach running in [Biol. Cybern. 83 (6) (2000) 517]. The models consist of a rigid body with a pair of effective legs (each representing the insect’s support tripod) placed intermittently in ground contact. The forces generated may be prescribed as functions of time, or developed by compression of a passive leg spring. We find periodic gaits in both cases, and show that prescribed (sinusoidal) forces always produce unstable gaits, unless they are allowed to rotate with the body during stride, in which case a (small) range of physically unrealistic stable gaits does exist. Stability is much more robust in the passive spring case, in which angular momentum transfer at touchdown/liftoff can result in convergence to asymptotically straight motions with bounded yaw, fore-aft and lateral velocity oscillations. Using a non-dimensional formulation of the equations of motion, we also develop exact and approximate scaling relations that permit derivation of gait characteristics for a range of leg stiffnesses, lengths, touchdown angles, body masses and inertias, from a single gait family computed at ‘standard’ parameter values.

  16. Estimates of genetic parameters for growth traits in Brahman cattle using random regression and multitrait models.

    PubMed

    Bertipaglia, T S; Carreño, L O D; Aspilcueta-Borquis, R R; Boligon, A A; Farah, M M; Gomes, F J; Machado, C H C; Rey, F S B; da Fonseca, R

    2015-08-01

    Random regression models (RRM) and multitrait models (MTM) were used to estimate genetic parameters for growth traits in Brazilian Brahman cattle and to compare the estimated breeding values obtained by these 2 methodologies. For RRM, 78,641 weight records taken between 60 and 550 d of age from 16,204 cattle were analyzed, and for MTM, the analysis consisted of 17,385 weight records taken at the same ages from 12,925 cattle. All models included the fixed effects of contemporary group and the additive genetic, maternal genetic, and animal permanent environmental effects, and the quadratic effect of age at calving (AAC) as a covariate. For RRM, the AAC was nested in the animal's age class. The best RRM considered cubic polynomials and residual variance heterogeneity (5 levels). For MTM, the weights were adjusted to standard ages. For RRM, additive heritability estimates ranged from 0.42 to 0.75, and for MTM, the estimates ranged from 0.44 to 0.72, at 60, 120, 205, 365, and 550 d of age. The maximum maternal heritability estimate (0.08) was at 140 d for RRM, whereas for MTM it was highest at weaning (0.09). The magnitude of the genetic correlations was generally moderate to high. The RRM adequately modeled changes in variance and covariance with age, and provided there is a sufficient number of records, increased accuracy in the estimation of genetic parameters can be expected. Bull rankings differed between the two methods at all ages evaluated, especially at high selection intensities, which could affect the response to selection. PMID:26440161
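    The heritability estimates quoted above are ratios of variance components; a minimal sketch, assuming a simplified animal-model partition and hypothetical variance values:

    ```python
    def heritability(var_additive, var_maternal, var_perm_env, var_residual):
        """Narrow-sense heritability: additive genetic variance divided by total
        phenotypic variance (simplified animal-model variance partition)."""
        var_phenotypic = var_additive + var_maternal + var_perm_env + var_residual
        return var_additive / var_phenotypic

    # Hypothetical variance components for a weight trait (kg^2).
    h2 = heritability(120.0, 15.0, 25.0, 90.0)  # 120 / 250 = 0.48
    ```

    A value of 0.48 falls inside the 0.42-0.75 range the abstract reports for additive heritability of weight.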

  17. Filling Gaps in the Acculturation Gap-Distress Model: Heritage Cultural Maintenance and Adjustment in Mexican-American Families.

    PubMed

    Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J

    2016-07-01

    The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.

  18. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

    USGS Publications Warehouse

    Barks, C.S.

    1995-01-01

    Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
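    The core of the MAP-R-P idea, regressing observed values on regional-model predictions to correct a consistent bias, can be sketched as follows; the function name and data are illustrative, not the report's actual procedure or numbers:

    ```python
    import numpy as np

    def map_r_p(observed, predicted):
        """Fit observed = slope * predicted + intercept by least squares and
        return a function that adjusts new regional-model predictions
        (a sketch of the regression-based MAP-R-P adjustment)."""
        slope, intercept = np.polyfit(predicted, observed, 1)
        return lambda p: slope * np.asarray(p, dtype=float) + intercept

    # A regional model that consistently overestimates loads by a factor of two.
    predicted = np.array([2.0, 4.0, 6.0, 8.0])
    observed = 0.5 * predicted
    adjust = map_r_p(observed, predicted)
    ```

    Such an adjustment is only defensible when O and Pu are significantly correlated, which is exactly the screening step the abstract describes.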

  19. A stress and coping model of adjustment to caring for an adult with mental illness.

    PubMed

    Mackay, Christina; Pakenham, Kenneth I

    2012-08-01

    This study investigated the utility of a stress and coping framework for identifying factors associated with adjustment to informal caregiving to adults with mental illness. Relations between stress and coping predictors and negative (distress) and positive (positive affect, life satisfaction, benefit finding, health) carer adjustment outcomes were examined. A total of 114 caregivers completed questionnaires. Predictors included relevant background variables (carer and care recipient characteristics and caregiving context), coping resources (optimism, social support, carer-care recipient relationship quality), appraisal (threat, control, challenge) and coping strategies (problem-focused, avoidance, acceptance, meaning-focused). Results indicated that after controlling for relevant background variables (burden, caregiving frequency, care recipient symptom unpredictability), better caregiver adjustment was related to higher social support and optimism, better quality of carer-care recipient relationship, lower threat and higher challenge appraisals, and less reliance on avoidance coping, as hypothesised. Coping resources emerged as the most consistent predictor of adjustment. Findings support the utility of stress and coping theory in identifying risk and protective factors associated with adaptation to caring for an adult with mental illness.

  20. Divorce Stress and Adjustment Model: Locus of Control and Demographic Predictors.

    ERIC Educational Resources Information Center

    Barnet, Helen Smith

    This study depicts the divorce process over three time periods: predivorce decision phase, divorce proper, and postdivorce. Research has suggested that persons with a more internal locus of control experience less intense and shorter intervals of stress during the divorce proper and better postdivorce adjustment than do persons with a more…

  1. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  2. A Structural Equation Modeling Approach to the Study of Stress and Psychological Adjustment in Emerging Adults

    ERIC Educational Resources Information Center

    Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff

    2008-01-01

    Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…

  3. An Adaptive Sequential Design for Model Discrimination and Parameter Estimation in Non-Linear Nested Models

    SciTech Connect

    Tommasi, C.; May, C.

    2010-09-30

    The DKL-optimality criterion has recently been proposed for the dual problem of model discrimination and parameter estimation in the case of two rival models. A sequential version of the DKL-optimality criterion is herein proposed in order to discriminate among and efficiently estimate more than two nested non-linear models. Our sequential method is inspired by the procedure of Biswas and Chaudhuri (2002), which, however, is useful only in the setting of nested linear models.

  4. Simultaneous model discrimination and parameter estimation in dynamic models of cellular systems

    PubMed Central

    2013-01-01

    Background Model development is a key task in systems biology, which typically starts from an initial model candidate and, involving an iterative cycle of hypotheses-driven model modifications, leads to new experimentation and subsequent model identification steps. The final product of this cycle is a satisfactory refined model of the biological phenomena under study. During such iterative model development, researchers frequently propose a set of model candidates from which the best alternative must be selected. Here we consider this problem of model selection and formulate it as a simultaneous model selection and parameter identification problem. More precisely, we consider a general mixed-integer nonlinear programming (MINLP) formulation for model selection and identification, with emphasis on dynamic models consisting of sets of either ODEs (ordinary differential equations) or DAEs (differential algebraic equations). Results We solved the MINLP formulation for model selection and identification using an algorithm based on Scatter Search (SS). We illustrate the capabilities and efficiency of the proposed strategy with a case study considering the KdpD/KdpE system regulating potassium homeostasis in Escherichia coli. The proposed approach resulted in a final model that presents a better fit to the in silico generated experimental data. Conclusions The presented MINLP-based optimization approach for nested-model selection and identification is a powerful methodology for model development in systems biology. This strategy can be used to perform model selection and parameter estimation in one single step, thus greatly reducing the number of experiments and computations of traditional modeling approaches. PMID:23938131

  5. Hard-coded parameters have the largest impact on fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Wulfmeyer, V.; Attinger, S.; Thober, S.

    2015-12-01

    Land surface models incorporate a large number of processes, described by physical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. The land surface model Noah with multiple process options (Noah-MP) is one of the standard land surface schemes in WRF and gives the flexibility to experiment with several model parameterizations of biophysical and hydrological processes. The model has around 80 parameters per plant functional type or soil class, which are given in tabulated form and which can be adjusted. Here we looked into the model code in considerable detail and found another 140 hard-coded values in all parameterizations, called hidden parameters here, of which around 50-60 are active in specific combinations of the process options. We quantify global parametric sensitivities (SI) for the traditional and the hidden parameters for five model outputs in 12 MOPEX catchments of very different local hydro-meteorologies. Outputs are photosynthesis, transpiration, latent heat, and surface and underground runoff. Photosynthesis is mostly sensitive to parameters describing plant physiology. Its second-largest SI is for a hidden parameter that partitions incoming radiation into direct and diffuse components. Transpiration shows very similar SI to photosynthesis. The SI of latent heat are, however, very different from those of transpiration. Its largest SI is observed for a hidden parameter in the formulation of soil surface resistance, due to low transpiration in Noah-MP. Surface runoff is mostly sensitive to soil and infiltration parameters. But it is also sensitive to almost all hidden snow parameters, which are about 40% of all hidden parameters. The largest SI of surface runoff is to the albedo of fresh snow and the second largest to the thermal conductivity of snow. Sensitive parameters for underground runoff, finally, are a mixture of those of latent heat and surface runoff. In
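    A first-order global sensitivity index of the kind quantified here measures how much of the output variance a single parameter explains on its own. The toy model and the binned conditional-mean estimator below are illustrative assumptions, not the study's actual method:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_flux(p):
        """Toy 'model output' with one strong, one weak and one inactive parameter."""
        return 3.0 * p[:, 0] + 0.5 * p[:, 1] ** 2

    def first_order_si(params, out, bins=20):
        """Crude first-order sensitivity index per parameter: variance of the
        binned conditional means of the output divided by its total variance."""
        si = []
        for j in range(params.shape[1]):
            edges = np.quantile(params[:, j], np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.searchsorted(edges, params[:, j]) - 1, 0, bins - 1)
            cond_means = np.array([out[idx == b].mean() for b in range(bins)])
            si.append(cond_means.var() / out.var())
        return np.array(si)

    params = rng.uniform(0.0, 1.0, (10_000, 3))  # third column is inactive
    out = toy_flux(params)
    si = first_order_si(params, out)             # si[0] should dominate
    ```

    An index near zero, as for the third parameter here, is how an inactive (or inactive-in-this-option-combination) hidden parameter would show up.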

  6. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    NASA Astrophysics Data System (ADS)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
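    A brief sketch of the VHS scaling being calibrated: viscosity follows a temperature power law, and the total cross section follows Bird's standard VHS form. The numerical values used in testing are merely illustrative, not the recommended parameters of this work:

    ```python
    from math import pi, gamma

    def vhs_viscosity(t, mu_ref, t_ref, omega):
        """VHS power-law viscosity: mu(T) = mu_ref * (T / T_ref)**omega."""
        return mu_ref * (t / t_ref) ** omega

    def vhs_cross_section(cr, d_ref, t_ref, omega, mass_r, k_b=1.380649e-23):
        """Total VHS cross section versus relative speed cr (Bird's form):
        sigma_T = pi d_ref^2 (2 k_B T_ref / (m_r cr^2))^(omega - 1/2) / Gamma(2.5 - omega),
        with reduced mass m_r of the colliding pair."""
        return (pi * d_ref ** 2
                * (2.0 * k_b * t_ref / (mass_r * cr ** 2)) ** (omega - 0.5)
                / gamma(2.5 - omega))
    ```

    Matching omega (and the VSS deflection exponent alpha) against ab initio collision integrals is what ties the DSMC transport properties to the LAURA/DPLR data.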

  7. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements, are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exception of an improved ocean loading model and some new options for handling clock parametrization. Misprints that were discovered have been corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  8. Model inversion by parameter fit using NN emulating the forward model: evaluation of indirect measurements.

    PubMed

    Schiller, Helmut

    2007-05-01

    The use of inverse models to derive parameters of interest from measurements is widespread in science and technology. The operational use of many inverse models became feasible only through emulation of the inverse model via a neural net (NN). This paper shows how NNs can be used to improve inversion accuracy by minimizing the sum of squared errors. The procedure is very fast, as it takes advantage of the Jacobian, which is a byproduct of the NN calculation. An example from remote sensing is shown. It is also possible to take into account a non-diagonal covariance matrix of the measurement to derive the covariance matrix of the retrieved parameters.
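    Using the forward model's Jacobian to minimize the sum of squared errors amounts to Gauss-Newton iteration. The sketch below substitutes an analytic exponential model for the NN emulator; the model form and all names are illustrative assumptions:

    ```python
    import numpy as np

    def forward(p, x):
        """Stand-in for the NN-emulated forward model: y = p0 * exp(-p1 * x)."""
        return p[0] * np.exp(-p[1] * x)

    def jacobian(p, x):
        """Analytic Jacobian d(forward)/d(p), playing the role of the Jacobian
        that an NN emulator yields as a byproduct of its evaluation."""
        e = np.exp(-p[1] * x)
        return np.column_stack([e, -p[0] * x * e])

    def gauss_newton(y_meas, x, p0, n_iter=30):
        """Minimize the sum of squared errors using the Jacobian at each step."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = y_meas - forward(p, x)
            j = jacobian(p, x)
            p = p + np.linalg.solve(j.T @ j, j.T @ r)  # normal equations step
        return p

    x = np.linspace(0.0, 2.0, 30)
    p_true = np.array([2.0, 1.3])
    p_fit = gauss_newton(forward(p_true, x), x, p0=[1.0, 1.0])
    ```

    With a measurement covariance matrix available, the same step generalizes to weighted normal equations, which is the covariance propagation the abstract mentions.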

  9. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    NASA Astrophysics Data System (ADS)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities at almost all substations. The power grid's response to changes in ground conductivity and resistances shows similar, though smaller, effects. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
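    The standard way to compute GIC in a grid is the Lehtinen-Pirjola nodal method, I = (1 + Y_n Z_e)^(-1) J. A minimal two-substation sketch follows; the resistances and geoelectric field are illustrative values, not the Uruguayan grid's:

    ```python
    import numpy as np

    def gic_earthing_currents(y_network, z_earthing, j_perfect):
        """Lehtinen-Pirjola nodal method: earthing GIC I = (1 + Y_n Z_e)^(-1) J,
        with network admittance Y_n, diagonal earthing impedance Z_e and
        perfect-earthing currents J."""
        n = y_network.shape[0]
        return np.linalg.solve(np.eye(n) + y_network @ z_earthing, j_perfect)

    # Toy two-substation line: 100 km under a uniform 1 V/km geoelectric field.
    e_field, length = 1.0, 100.0        # V/km, km
    r_line, r_a, r_b = 3.0, 0.5, 0.5    # line and grounding resistances, ohms
    v = e_field * length                 # induced series voltage, 100 V
    y = np.array([[1.0, -1.0], [-1.0, 1.0]]) / r_line
    z = np.diag([r_a, r_b])
    j = np.array([v / r_line, -v / r_line])
    i_gic = gic_earthing_currents(y, z, j)
    ```

    For this two-node case the analytic answer is simply v / (r_line + r_a + r_b) flowing down one earthing and up the other, which the matrix solution reproduces.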

  10. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for the analysis of geodetic Global Positioning Satellite (GPS) measurements, are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document, which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  11. Roughness parameter optimization using Land Parameter Retrieval Model and Soil Moisture Deficit: Implementation using SMOS brightness temperatures

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; O'Neill, Peggy; Han, Dawei; Rico-Ramirez, Miguel A.; Petropoulos, George P.; Islam, Tanvir; Gupta, Manika

    2015-04-01

    Roughness parameterization is necessary for nearly all soil moisture retrieval algorithms, such as the single- or dual-channel algorithms, the L-band Microwave Emission of the Biosphere (L-MEB) model, the Land Parameter Retrieval Model (LPRM), etc. At present, roughness parameters can be obtained either from field experiments, although obtaining field measurements all over the globe is nearly impossible, or from a land cover-based look-up table, which is not accurate everywhere for individual fields. Of the models available in the technical literature, the LPRM was used here because of its robust nature and applicability to a wide range of frequencies. LPRM needs several parameters for soil moisture retrieval; in particular, the roughness parameters (h and Q) are important for calculating reflectivity. In this study, the h and Q parameters are optimized using the soil moisture deficit (SMD) estimated from the probability distributed model (PDM) and Soil Moisture and Ocean Salinity (SMOS) brightness temperatures, following the Levenberg-Marquardt (LM) algorithm, over the Brue catchment in southwest England, UK. The catchment is predominantly pasture land with moderate topography. The PDM-based SMD is used because it is calibrated and validated against locally available ground-based information and is suitable for large-scale areas such as catchments. The optimal h and Q parameters are determined by maximizing the correlation between SMD and LPRM-retrieved soil moisture. After optimization, the values of h and Q were found to be 0.32 and 0.15, respectively. To test the usefulness of the estimated roughness parameters, a separate set of SMOS data is used for soil moisture retrieval with the LPRM and the optimized roughness parameters. The overall analysis indicates a satisfactory result when compared against the SMD information. This work provides quantitative values of roughness parameters suitable for large-scale applications. The
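    The optimisation step can be sketched as follows. This is illustrative only: a toy retrieval function stands in for the LPRM, the brightness temperatures and SMD series are synthetic, and a plain grid search is substituted for the Levenberg-Marquardt algorithm used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
tb = 250.0 + 20.0 * rng.random(n)        # synthetic brightness temperatures (K)

def retrieve_sm(tb, h, Q):
    # toy roughness-corrected reflectivity as a soil moisture proxy
    refl = (300.0 - tb) / 100.0
    return (1.0 - Q) * refl * np.exp(-h * refl) + Q * refl**2

# Synthetic "truth": an SMD series generated with h=0.32, Q=0.15 plus noise
smd = -retrieve_sm(tb, 0.32, 0.15) + 0.005 * rng.standard_normal(n)

def neg_corr(h, Q):
    # objective: maximise |correlation| between retrieval and SMD
    return -abs(np.corrcoef(retrieve_sm(tb, h, Q), smd)[0, 1])

best = min((neg_corr(h, q), h, q)
           for h in np.linspace(0.0, 1.0, 51)
           for q in np.linspace(0.0, 0.5, 51))
print("best (h, Q):", best[1], best[2], "correlation:", -best[0])
```

    In practice a gradient-based optimiser such as LM converges far faster than a grid, but the objective, maximising the SMD-retrieval correlation over (h, Q), is the same.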

  12. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  13. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model for predicting the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.

  14. A Note on the Item Information Function of the Four-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…

  15. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models. I. Parameter accuracy and benchmark stars

    NASA Astrophysics Data System (ADS)

    Passegger, V. M.; Wende-von Berg, S.; Reiners, A.

    2016-03-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ2-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid, which uses the ACES description for the equation of state; this model generation is expected to be especially suitable for low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σTeff = 35 K, σlog g = 0.14, and σ[Fe/H] = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation is systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitively identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at the ESO VLT.

  16. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  17. Stochastic modelling of daily rainfall in Nigeria: intra-annual variation of model parameters

    NASA Astrophysics Data System (ADS)

    Jimoh, O. D.; Webster, P.

    1999-09-01

    A Markov model of order 1 may be used to describe the occurrence of wet and dry days in Nigeria. Such models feature two parameter sets: P01, characterising the probability of a wet day following a dry day, and P11, characterising the probability of a wet day following a wet day. The model parameter sets, when estimated from historical records, are characterised by a distinctive seasonal behaviour. However, comparison of this seasonal behaviour between rainfall stations is hampered by noise reflecting the high variability of parameters on successive days. The first part of this article is concerned with methods for smoothing these inherently noisy parameter sets. Smoothing has been approached using Fourier series, averaging techniques, or a combination thereof. It has been found that the different methods generally perform well with respect to estimating the average number of wet events and the frequency-duration curves of wet and dry events. Parameterisation of the P01 set is more successful than that of P11, in view of the relatively small number of wet events lasting two or more days. The second part of the article describes the regional variation in the smoothed parameter sets. There is a systematic variation in the P01 parameter set as one moves northwards; in contrast, there is limited regional variation in the P11 set. Although the regional variation in P01 appears to be related to the gradual movement of the Inter-Tropical Convergence Zone, the contrasting behaviour of the two parameter sets is difficult to explain on physical grounds.
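    The model structure can be sketched directly. Below, a first-order Markov chain generates a year of wet/dry days, with the seasonal variation of P01 and P11 smoothed by a single-harmonic Fourier series (one of the smoothing options mentioned in the abstract). All numeric values are illustrative, not fitted to Nigerian data.

```python
import numpy as np

def fourier_smooth(day, mean, amp, phase):
    # first-harmonic Fourier smoothing of a daily parameter series
    return mean + amp * np.cos(2 * np.pi * day / 365 - phase)

rng = np.random.default_rng(42)
state = 0                      # start on a dry day (0 = dry, 1 = wet)
wet_days = 0
for day in range(365):
    p01 = fourier_smooth(day, 0.25, 0.15, 0.5)   # P(wet | previous day dry)
    p11 = fourier_smooth(day, 0.55, 0.20, 0.5)   # P(wet | previous day wet)
    p_wet = p11 if state == 1 else p01
    state = 1 if rng.random() < p_wet else 0
    wet_days += state
print("simulated wet days:", wet_days)
```

    The same two smoothed parameter series suffice to reproduce wet-spell and dry-spell duration statistics, since spell lengths under an order-1 chain are geometric in P11 and P01.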

  18. Dynamic hydrologic modeling using the zero-parameter Budyko model with instantaneous dryness index

    NASA Astrophysics Data System (ADS)

    Biswal, Basudev

    2016-09-01

    Long-term partitioning of hydrologic quantities is achieved by using the zero-parameter Budyko model, which is driven by a dryness index. However, this approach is not suitable for dynamic partitioning, particularly at diminishing timescales, and therefore a universally applicable zero-parameter model remains elusive. Here an instantaneous dryness index is proposed which enables dynamic hydrologic modeling using the Budyko model. By introducing a "decay function" that characterizes the effects of antecedent rainfall and solar energy on the dryness state of a basin at a given time, I propose the concept of an instantaneous dryness index and use the Budyko function to perform continuous hydrologic partitioning. Using the same decay function, I then obtain the discharge time series from the effective rainfall time series. The model is evaluated using data from 63 U.S. Geological Survey basins. The results indicate the possibility of using the proposed framework as an alternative platform for prediction in ungauged basins.
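    For reference, the classical long-term form of the zero-parameter model can be written down directly. The snippet below applies Budyko's (1974) interpolation formula for the evaporative fraction as a function of the dryness index; the "instantaneous" index of the abstract is specific to that paper and is not reproduced here, and the precipitation/evapotranspiration numbers are illustrative.

```python
import numpy as np

def budyko_evaporative_fraction(phi):
    # Budyko (1974): E/P = sqrt(phi * tanh(1/phi) * (1 - exp(-phi)))
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

P, PET = 900.0, 1200.0             # annual precipitation and PET (mm), illustrative
phi = PET / P                      # dryness index
E = P * budyko_evaporative_fraction(phi)
Q = P - E                          # long-term runoff by water balance
print(f"dryness index {phi:.2f}: E = {E:.0f} mm, Q = {Q:.0f} mm")
```

    The formula has the right limits: E/P approaches phi (energy-limited) as phi goes to 0, and approaches 1 (water-limited) as phi grows large.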

  19. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

    A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from

  20. Ecosystem Modeling of College Drinking: Parameter Estimation and Comparing Models to Data*

    PubMed Central

    Ackleh, Azmy S.; Fitzpatrick, Ben G.; Scribner, Richard; Simonsen, Neal; Thibodeaux, Jeremy J.

    2009-01-01

    Recently we developed a model composed of five impulsive differential equations that describes the changes in drinking patterns (that persist at epidemic level) amongst college students. Many of the model parameters cannot be measured directly from data; thus, an inverse problem approach, which chooses the set of parameters that results in the “best” model to data fit, is crucial for using this model as a predictive tool. The purpose of this paper is to present the procedure and results of an unconventional approach to parameter estimation that we developed after more common approaches were unsuccessful for our specific problem. The results show that our model provides a good fit to survey data for 32 campuses. Using these parameter estimates, we examined the effect of two hypothetical intervention policies: 1) reducing environmental wetness, and 2) penalizing students who are caught drinking. The results suggest that reducing campus wetness may be a very effective way of reducing heavy episodic (binge) drinking on a college campus, while a policy that penalizes students who drink is not nearly as effective. PMID:20161275

  1. Constrained optimisation of the parameters for a simple isostatic Moho model

    NASA Astrophysics Data System (ADS)

    Lane, R. J.

    2010-12-01

    of elevation / bathymetry values (H), Moho depth observation values from the seismic refraction soundings (Tm), the water density value (RHOw), and prior estimates and bounds for the output parameters. A number of different deterministic and stochastic inversion methods were used to derive solutions for the optimisation, enabling an evaluation of the uncertainty and sensitivity of the posterior estimates to be carried out. The output parameters that provided the scaling and vertical positioning of an isostatic model Moho surface that best fitted the seismic refraction Moho depths were found to be in general accord with parameters chosen by others when working in similar geological environments. A reasonable match between the Moho surfaces defined from seismic refraction and isostatic methods suggested that the use of an isostatic model assumption was valid in this instance. Further, the gravity response of the 3D geological map was found to match the observed gravity data after making relatively minor adjustments to the geometry of the Moho surface and the upper crustal basin thicknesses. It was thus concluded that the integrated regional 3D geological understanding of the upper crustal and Moho surfaces, and the related mass density contrasts across these units, was consistent with the observed gravity data.

  2. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  3. Long wave atmospheric noise model, phase 1. Volume 2: Mode parameters

    NASA Astrophysics Data System (ADS)

    Warber, Chris R.

    1989-04-01

    The full wave propagation code is used to calculate waveguide mode parameters in spread-debris environments in order to develop a long wave atmospheric noise model. The parameters are stored for retrieval whenever the model is exercised. Because the noise-model data encompass parameters of all significant modes for a wide range of ground conductivities, frequencies, and nuclear environment intensities, graphs of those parameters are presented in handbook format in this volume.

  4. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the "true" ones. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This differs from previous modeling efforts, which focused on uncertainty in physical parameters (e.g. soil porosity); the present work addresses uncertainty in the mathematical simulator itself, arising from model residuals. Compared to existing approaches in which only parameter uncertainty is considered, the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  5. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
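    The screening logic behind such an analysis can be sketched generically. This is not the Delft3D study's method: the response function below is a purely illustrative proxy, and only a simple one-at-a-time perturbation is shown. Parameter names echo the abstract; the "wind_drag" input here is a normalised, dimensionless stand-in.

```python
import numpy as np

def surge_proxy(p):
    # toy surge response: sensitive to the first three inputs, flat in the last
    wind_drag, gamma_B, roughness, threshold_depth = p
    return 2.0 * wind_drag**1.5 + 0.8 * gamma_B - 1.2 * np.log(roughness) \
        + 0.0 * threshold_depth

base = np.array([1.0, 0.73, 0.02, 0.05])     # illustrative baseline values
names = ["wind_drag", "gamma_B", "roughness", "threshold_depth"]

y0 = surge_proxy(base)
sens = {}
for i, name in enumerate(names):
    p = base.copy()
    p[i] *= 1.10                              # +10% perturbation
    sens[name] = abs(surge_proxy(p) - y0) / abs(y0)
print(sens)
```

    Parameters whose perturbation leaves the output unchanged (here, the threshold depth) are exactly the ones that can be frozen to reduce the dimensionality of a probabilistic hazard study; a full analysis would also vary parameters jointly to expose the interactions the abstract mentions.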

  6. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.

    2015-10-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  7. The Effects on Parameter Estimation of Correlated Abilities Using a Two-Dimensional, Two-Parameter Logistic Item Response Model.

    ERIC Educational Resources Information Center

    Batley, Rose-Marie; Boss, Marvin W.

    The effects of correlated dimensions on parameter estimation were assessed, using a two-dimensional item response theory model. Past research has shown the inadequacies of the unidimensional analysis of multidimensional item response data. However, few studies have reported multidimensional analysis of multidimensional data, and, in those using…

  8. A limit-cycle model of leg movements in cross-country skiing and its adjustments with fatigue.

    PubMed

    Cignetti, F; Schena, F; Mottet, D; Rouard, A

    2010-08-01

    Using dynamical modeling tools, the aim of the study was to establish a minimal model reproducing leg movements in cross-country skiing and to evaluate possible adjustments of this model with fatigue. The participants (N=8) skied on a treadmill at 90% of their maximal oxygen consumption, up to exhaustion, using the diagonal stride technique. Qualitative analysis of leg kinematics portrayed in phase planes, Hooke planes, and velocity profiles suggested the inclusion in the model of a linear stiffness and an asymmetric van der Pol-type nonlinear damping. Quantitative analysis revealed that this model reproduced the observed kinematic patterns of the leg adequately, accounting for 87% of the variance. A rising influence of the stiffness term and a declining influence of the damping terms were also evidenced with fatigue. The meaning of these changes was discussed in the framework of motor control.
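    A model of this family can be sketched numerically. The coefficients and the exact form of the asymmetric damping below are illustrative, not the fitted values from the paper: a linear stiffness term is combined with a van der Pol-type term plus an asymmetric x·v term, and a simple Euler integration shows the trajectory settling onto a bounded limit cycle.

```python
import numpy as np

def step(x, v, dt, k=1.0, a=1.0, b=0.5):
    # x'' = -k*x - a*(x^2 - 1)*v - b*x*v
    # k: linear stiffness; a: van der Pol damping; b: asymmetric damping term
    acc = -k * x - a * (x**2 - 1.0) * v - b * x * v
    return x + dt * v, v + dt * acc

x, v = 0.1, 0.0          # small initial displacement; the origin is unstable
dt = 0.001
amps = []
for i in range(200_000):
    x, v = step(x, v, dt)
    if i > 150_000:      # record only after transients have decayed
        amps.append(abs(x))
print("late-time amplitude bound:", max(amps))
```

    The damping coefficient a·(x² − 1) + b·x is negative near the origin (energy injection) and positive at large amplitude (dissipation), which is what produces the self-sustained, bounded oscillation characteristic of cyclic limb movement.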

  9. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, the automatic identification of model parameters is identified as a key bottleneck for the application of such models in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The core algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameter values; a detailed technical route based on RSA and MCS is presented. A software module for automatic parameter identification was developed. Finally, a case study on a typical water pipe network was conducted, with satisfactory results. PMID:20329520
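    The Monte Carlo half of such a workflow can be sketched on a toy problem. Below, a single-pipe Hazen-Williams headloss model (standard SI form) stands in for the network simulator; candidate roughness coefficients are sampled, and the "behavioral" subset that reproduces an observed headloss bounds the identified parameter, in the spirit of RSA. All pipe dimensions and the true coefficient are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def headloss(C, L=1000.0, D=0.3, q=0.05):
    # Hazen-Williams headloss (SI): h = 10.67 L q^1.852 / (C^1.852 D^4.87)
    return 10.67 * L * q**1.852 / (C**1.852 * D**4.87)

C_true = 110.0
h_obs = headloss(C_true)                          # synthetic SCADA observation

C_samples = rng.uniform(60.0, 160.0, 5000)        # Monte Carlo sampling
err = np.abs(headloss(C_samples) - h_obs) / h_obs
behavioral = C_samples[err < 0.05]                # RSA-style behavioral split

print("behavioral fraction:", len(behavioral) / len(C_samples))
print("identified C interval:", behavioral.min(), behavioral.max())
```

    A narrow behavioral interval signals a sensitive, identifiable parameter; a behavioral set spread across the whole prior range would flag the parameter as insensitive to the available observations.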

  10. A Paradox between IRT Invariance and Model-Data Fit When Utilizing the One-Parameter and Three-Parameter Models

    ERIC Educational Resources Information Center

    Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon

    2008-01-01

    The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…

  11. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    NASA Astrophysics Data System (ADS)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap-Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard package IC-CAP is used to extract the leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data yields the model parameter values and the correlations/relations among parameters. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. The same holds for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.

  12. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model
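    The simplest of the schemes discussed, the two independent pause model, can be sketched as a continuous-time Markov simulation. The rates below are illustrative, not the fitted Phi29 values: an active state A exchanges directly with a short pause S (entered often, exited fast) and a long pause L (rare, long-lived), and the occupancy fractions follow from the rate ratios.

```python
import numpy as np

rng = np.random.default_rng(7)
rates = {  # (from, to): rate in s^-1, illustrative values
    ("A", "S"): 0.5, ("S", "A"): 2.0,
    ("A", "L"): 0.05, ("L", "A"): 0.1,
}

state, t = "A", 0.0
dwell = {"A": 0.0, "S": 0.0, "L": 0.0}
while t < 10_000.0:
    outs = [(dst, k) for (src, dst), k in rates.items() if src == state]
    ktot = sum(k for _, k in outs)
    tau = rng.exponential(1.0 / ktot)     # Gillespie waiting time
    dwell[state] += tau
    t += tau
    u = rng.random() * ktot               # pick the next state by rate weight
    for dst, k in outs:
        u -= k
        if u <= 0:
            state = dst
            break

total = sum(dwell.values())
print({s: round(d / total, 3) for s, d in dwell.items()})
```

    With these rates the stationary occupancies are A: 4/7, S: 1/7, L: 2/7, and the simulated dwell fractions converge to them; distinguishing this scheme from a connected-pause scheme requires the force dependence of the rates, as the abstract discusses.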

  13. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  14. History matching for exploring and reducing climate model parameter space using observations and a large perturbed physics ensemble

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel; Goldstein, Michael; Allison, Lesley; Blaker, Adam; Challenor, Peter; Jackson, Laura; Yamazaki, Kuniko

    2013-10-01

    We apply an established statistical methodology called history matching to constrain the parameter space of a coupled non-flux-adjusted climate model (the third Hadley Centre Climate Model; HadCM3) by using a 10,000-member perturbed physics ensemble and observational metrics. History matching uses emulators (fast statistical representations of climate models that include a measure of uncertainty in the prediction of climate model output) to rule out regions of the parameter space of the climate model that are inconsistent with physical observations given the relevant uncertainties. Our methods rule out about half of the parameter space of the climate model even though we only use a small number of historical observations. We explore two-dimensional projections of the remaining space and observe a region whose shape mainly depends on parameters controlling cloud processes and one ocean mixing parameter. We find that global mean surface air temperature (SAT) is the dominant constraint of those used, and that the others provide little further constraint after matching to SAT. The Atlantic meridional overturning circulation (AMOC) has a nonlinear relationship with SAT and is not a good proxy for the meridional heat transport in the unconstrained parameter space, but these relationships are linear in our reduced space. We find that the transient response of the AMOC to idealised CO2 forcing at 1 and 2% per year shows a greater average reduction in strength in the constrained parameter space than in the unconstrained space. We test extended ranges of a number of parameters of HadCM3 and discover that no part of the extended ranges can be ruled out using any of our constraints. Constraining parameter space using easy-to-emulate observational metrics prior to analysis of more complex processes is an important and powerful tool. It can remove complex and irrelevant behaviour in unrealistic parts of parameter space, allowing the processes in question to be more easily
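    The core of history matching, ruling out parameter settings whose emulated output is implausibly far from an observation given all uncertainties, can be sketched in a few lines. The quadratic "emulator", its variance, and the implausibility cutoff of 3 below are illustrative assumptions, not the HadCM3 setup:

```python
import numpy as np

# Toy history-matching sketch. The emulator returns a mean prediction and
# a variance for each candidate parameter value; points whose
# implausibility exceeds 3 are ruled out, leaving the NROY
# ("not ruled out yet") subset of parameter space.
def implausibility(obs, obs_var, em_mean, em_var):
    return np.abs(obs - em_mean) / np.sqrt(obs_var + em_var)

rng = np.random.default_rng(0)
theta = rng.uniform(-2, 2, size=10000)   # candidate parameter values
em_mean = theta**2                       # hypothetical emulator mean
em_var = np.full_like(theta, 0.05)       # emulator (code) uncertainty
obs, obs_var = 1.0, 0.05                 # observed metric and its error

I = implausibility(obs, obs_var, em_mean, em_var)
not_ruled_out = theta[I <= 3.0]          # NROY space
print(f"fraction retained: {not_ruled_out.size / theta.size:.2f}")
```

    With these numbers roughly half the candidate space survives, clustered where the emulated output is near the observation, which mirrors the "rule out about half of the parameter space" result described above.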

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design: part II. Model application.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    A new stochastic optimization model under modeling uncertainty (SOMUM) and parameter certainty is applied to a practical site located in western Canada. Various groundwater remediation strategies under different significance levels are obtained from the SOMUM model. The impact of modeling uncertainty (proxy-simulator residuals) on optimal remediation strategies is compared to that of parameter uncertainty (arising from physical properties). The results show that the increased remediation cost for mitigating the impact of modeling uncertainty would be higher than that from models where the coefficient of variation of the input parameters approaches 40%. This provides new evidence that the modeling uncertainty in proxy-simulator residuals can hardly be ignored; there is thus a need to investigate and mitigate the impact of such uncertainties on groundwater remediation design. This work would be helpful for lowering the risk of system failure due to potential environmental-standard violations when determining optimal groundwater remediation strategies.

  16. Modeling Nonlinear Adsorption to Carbon with a Single Chemical Parameter: A Lognormal Langmuir Isotherm.

    PubMed

    Davis, Craig Warren; Di Toro, Dominic M

    2015-07-01

    Predictive models for linear sorption of solutes onto various media, e.g., soil organic carbon, are well-established; however, methods for predicting parameters for nonlinear isotherm models, e.g., Freundlich and Langmuir models, are not. Predicting nonlinear partition coefficients is complicated by the number of model parameters needed to fit n isotherms (e.g., Freundlich (2n) or Polanyi-Manes (3n)). The purpose of this paper is to present a nonlinear adsorption model with only one chemically specific parameter. To accomplish this, several simplifications to a log-normal Langmuir (LNL) isotherm model with 3n parameters were explored. A single sorbate-specific binding constant, the median Langmuir binding constant, and two global sorbent parameters, the total site density and the standard deviation of the Langmuir binding constant, were employed. This single-solute-specific (ss-LNL) model (2 + n parameters) was demonstrated to fit adsorption data as well as the 2n-parameter Freundlich model. The LNL isotherm model is fit to four data sets composed of various chemicals sorbed to graphite, charcoal, and activated carbon. The RMS errors for the four models were 0.066, 0.068, 0.069, and 0.113, respectively. The median logarithmic parameter standard errors for the four models were 1.070, 0.4537, 0.382, and 0.201, respectively. Further, the single-parameter model was the only model for which there were no standard errors of estimated parameters greater than a factor of 3 (0.50 log units). The surprising result is that very little decrease in RMSE occurs when two of the three parameters, σκ and qmax, are sorbate independent. However, the large standard errors present in the other models are significantly reduced. This remarkable simplification yields the single sorbate-specific parameter (ss-LNL) model. PMID:26035092
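    The lognormal-Langmuir idea can be sketched numerically: the isotherm is the Langmuir coverage averaged over a normal distribution of log10 binding constants. The q_max, median log K, and sigma values below are illustrative choices, not the fitted parameters from the paper.

```python
import numpy as np

# Sketch of a lognormal-Langmuir (LNL) isotherm, assuming the form
# q(c) = q_max * E_K[ K*c / (1 + K*c) ] with log10(K) ~ Normal(mu, sigma).
# Parameter values are made up for illustration.
def lnl_isotherm(c, q_max, log_k_median, sigma, n_nodes=2001):
    # discretize the normal distribution of log10(K) on a symmetric grid
    logk = np.linspace(log_k_median - 5 * sigma, log_k_median + 5 * sigma, n_nodes)
    w = np.exp(-0.5 * ((logk - log_k_median) / sigma) ** 2)
    w /= w.sum()                      # normalized quadrature weights
    K = 10.0 ** logk
    theta = (K * c) / (1.0 + K * c)   # Langmuir coverage per site class
    return q_max * np.sum(w * theta)

c = 1e-3
q = lnl_isotherm(c, q_max=100.0, log_k_median=3.0, sigma=1.0)
print(round(q, 2))
```

    At a concentration where the median site is half-saturated (Kc = 1), the symmetric spread of binding constants leaves the average coverage at one half, so q = q_max/2 here.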

  18. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implications of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurements.

  19. A new genetic fuzzy system approach for parameter estimation of ARIMA model

    NASA Astrophysics Data System (ADS)

    Hassan, Saima; Jaafar, Jafreezal; Belhaouari, Brahim S.; Khosravi, Abbas

    2012-09-01

    The autoregressive integrated moving average (ARIMA) model is the most powerful and practical time series model for forecasting. Parameter estimation is the most crucial part of ARIMA modeling. Inaccurately estimated parameters lead to biased and unacceptable forecasting results. Parameter optimization can be adopted in order to increase demand forecasting accuracy. A paradigm combining a fuzzy system and a genetic algorithm is proposed in this paper as a parameter estimation approach for ARIMA. The new approach optimizes the parameters by tuning the fuzzy membership functions with a genetic algorithm. The proposed hybrid model of ARIMA and the genetic fuzzy system yields acceptable forecasting results.
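    As a baseline for what "parameter estimation" means here, the simplest ARIMA special case, an AR(1) model, can be estimated by ordinary least squares. The series below is synthetic and the genetic fuzzy tuning itself is not reproduced in this sketch:

```python
import numpy as np

# Least-squares estimate of the AR(1) coefficient phi in
# x_t = phi * x_{t-1} + e_t, the simplest instance of ARIMA parameter
# estimation. The true coefficient 0.6 is a made-up value.
rng = np.random.default_rng(4)
phi_true = 0.6
x = np.zeros(500)
for t in range(1, 500):
    x[t] = phi_true * x[t - 1] + rng.normal()

# OLS regression of x_t on x_{t-1} (no intercept)
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(round(phi_hat, 2))
```

    Methods such as the genetic fuzzy system described above target the same quantity but search the parameter space with evolutionary tuning rather than a closed-form regression.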

  20. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…
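    The difference between the two interval types is easy to see in a toy binomial example (the data values are made up): near a boundary, the symmetric Wald interval can spill outside the parameter space, while the likelihood-based interval, obtained by inverting the likelihood-ratio test with the chi-square critical value 3.841, cannot.

```python
import math

# Wald vs likelihood-based CI for a binomial proportion.
# x = 27 successes out of n = 30 trials are illustrative numbers.
x, n = 27, 30
phat = x / n

# Wald interval: symmetric around the estimate, based on the standard error.
se = math.sqrt(phat * (1 - phat) / n)
wald = (phat - 1.96 * se, phat + 1.96 * se)

# Likelihood-based interval: all p whose likelihood-ratio statistic
# 2*(l(phat) - l(p)) stays below the 95% chi-square(1) cutoff 3.841.
def loglik(p):
    return x * math.log(p) + (n - x) * math.log(1 - p)

grid = [i / 10000 for i in range(1, 10000)]
inside = [p for p in grid if 2 * (loglik(phat) - loglik(p)) <= 3.841]
lr = (min(inside), max(inside))

print(f"Wald: ({wald[0]:.3f}, {wald[1]:.3f})")  # upper limit exceeds 1
print(f"LR:   ({lr[0]:.3f}, {lr[1]:.3f})")
```

    The likelihood-based interval is also asymmetric around the estimate, reflecting the skew of the likelihood near the boundary.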

  1. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  2. Impact of kinetic parameters on heat transfer modeling for a pultrusion process

    NASA Astrophysics Data System (ADS)

    Gorthala, R.; Roux, J. A.; Vaughan, J. G.; Donti, R. P.; Hassouneh, A.

    An examination is conducted of pultrusion heat model predictions for various parameters of resin chemical kinetics; these parameters' values affect model heat-transfer results and model predictions. Attention is given to the applicability of DSC kinetic parameters to resin cure modeling, by comparing the predicted product cure temperature profiles and resin degree-of-cure values with pultrusion experiment results obtained for both carbon and glass reinforcements, different pull speeds and fiber volumes, and various die temperature profiles.

  3. An analytic solution to the Monod-Wyman-Changeux model and all parameters in this model.

    PubMed Central

    Zhou, G; Ho, P S; van Holde, K E

    1989-01-01

    Starting from the Monod-Wyman-Changeux (MWC) model (Monod, J., J. Wyman, and J. P. Changeux. 1965. J. Mol. Biol. 12:88-118), we obtain an analytical expression for the slope of the Hill plot at any ligand concentration. Furthermore, we derive an equation satisfied by the ligand concentration at the position of maximum slope. From these results, we derive a set of formulas which allow determination of the parameters of the MWC model (kR, C, and L) from the value of the Hill coefficient, nH, the ligand concentration at the position of maximum slope ([A]0), and the value of ν/(n-ν) at this point. We then outline procedures for utilizing these equations to provide a "best fit" of the MWC model to the experimental data, and to obtain a refined set of the parameters. Finally, we demonstrate the applicability of the technique by analysis of oxygen binding data for Octopus hemocyanin. PMID:2713440
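    The maximum Hill slope referred to above can also be located numerically from the MWC binding function. The parameter values below (n = 4 sites, L = 1000, c = 0.01) are generic textbook-style choices, not the Octopus hemocyanin fit:

```python
import math

# Fractional saturation of the MWC model, with alpha = [A]/K_R,
# allosteric constant L, and affinity ratio c = K_R/K_T.
def y_mwc(alpha, n=4, L=1000.0, c=0.01):
    num = alpha * (1 + alpha) ** (n - 1) + L * c * alpha * (1 + c * alpha) ** (n - 1)
    den = (1 + alpha) ** n + L * (1 + c * alpha) ** n
    return num / den

# Hill-plot slope d log(Y/(1-Y)) / d log(alpha) by central difference.
def hill_slope(alpha, h=1e-4):
    a1, a2 = alpha * math.exp(-h), alpha * math.exp(h)
    y1, y2 = y_mwc(a1), y_mwc(a2)
    return (math.log(y2 / (1 - y2)) - math.log(y1 / (1 - y1))) / (2 * h)

# Scan ligand concentrations for the maximum slope (the Hill coefficient).
alphas = [10 ** (i / 100) for i in range(-200, 301)]
nH, a_max = max((hill_slope(a), a) for a in alphas)
print(f"n_H = {nH:.2f} at alpha = {a_max:.2f}")
```

    The paper's contribution is the reverse direction: closed-form recovery of kR, C, and L from the measured nH, [A]0, and saturation at that point, rather than this forward scan.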

  4. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  5. Parameter identification and calibration of the Xin'anjiang model using the surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen

    2014-06-01

    Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running the original simulation model thousands of times, and so demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto front within a limited number of evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) for a conceptual rainfall-runoff model (Xin'anjiang model, XAJ) was proposed. Taking the Yanduhe basin of Three Gorges in the upper stream of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against calculated values from the simulation model: the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, it was indicated that the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling enables much more efficient optimization; the total computational cost is reduced by about 92.5% compared to optimization without surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation
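    The three goodness-of-fit criteria named above are straightforward to compute; in this sketch the observed and simulated discharge arrays are made-up numbers, not the Yanduhe data:

```python
import numpy as np

# The three calibration criteria: Nash-Sutcliffe efficiency (NSE),
# relative error of peak flow (REPF), relative error of runoff volume (RERV).
def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rel_err_peak(obs, sim):
    return (max(sim) - max(obs)) / max(obs)      # REPF

def rel_err_volume(obs, sim):
    return (sum(sim) - sum(obs)) / sum(obs)      # RERV

obs = [1.0, 3.0, 8.0, 5.0, 2.0]  # illustrative "observed" hydrograph
sim = [1.2, 2.8, 7.5, 5.5, 2.1]  # illustrative "simulated" hydrograph
print(nse(obs, sim), rel_err_peak(obs, sim), rel_err_volume(obs, sim))
```

    Multi-objective calibration then seeks parameter sets that are non-dominated across all three criteria simultaneously, rather than optimizing any one of them.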

  6. Macroscopic control parameter for avalanche models for bursty transport

    SciTech Connect

    Chapman, S. C.; Rowlands, G.; Watkins, N. W.

    2009-01-15

    Similarity analysis is used to identify the control parameter R_A for the subset of avalanching systems that can exhibit self-organized criticality (SOC). This parameter expresses the ratio of driving to dissipation. The transition to SOC, when the number of excited degrees of freedom is maximal, is found to occur when R_A → 0. This is in the opposite sense to (Kolmogorov) turbulence, thus identifying a deep distinction between turbulence and SOC and suggesting an observable property that could distinguish them. A corollary of this similarity analysis is that SOC phenomenology, that is, power law scaling of avalanches, can persist for finite R_A with the same R_A → 0 exponent if the system supports a sufficiently large range of lengthscales, necessary for SOC to be a candidate for physical (R_A finite) systems.

  7. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
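    A minimal recursive least-squares loop of the kind underlying the paper's estimator (without the residual-autocorrelation correction, and with synthetic data rather than flight-test measurements) looks like this:

```python
import numpy as np

# Recursive least squares for a linear model y = X @ theta + noise.
# Model structure, true coefficients, and noise level are all synthetic.
def rls(X, y, lam=1.0, delta=1e3):
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)               # large initial covariance
    for x_k, y_k in zip(X, y):
        Px = P @ x_k
        k = Px / (lam + x_k @ Px)       # gain vector
        theta = theta + k * (y_k - x_k @ theta)
        P = (P - np.outer(k, Px)) / lam # covariance update
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))           # regressors (e.g., states/controls)
true_theta = np.array([0.5, -1.2, 2.0]) # made-up "derivatives"
y = X @ true_theta + 0.01 * rng.normal(size=500)

theta_hat = rls(X, y)
print(theta_hat)
```

    The diagonal of P, scaled by the residual variance, gives the conventional (white-residual) uncertainty; the paper's contribution is correcting that uncertainty for colored residuals via a recursive residual autocorrelation.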

  8. Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models

    PubMed Central

    Liang, Hua

    2008-01-01

    Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called “forward problem”, the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the “inverse problem”, the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation (ODE) models based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study. PMID:19956350
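    For intuition about the inverse problem, here is a toy least-squares fit of a single rate constant in dx/dt = -kx by grid search; the data are synthetic, and the local-smoothing/pseudo-least-squares machinery of the paper is not reproduced:

```python
import numpy as np

# Toy inverse problem for dx/dt = -k*x: estimate the rate constant k by
# least squares on noisy observations of the solution x(t) = x0*exp(-k*t).
# The true k = 0.7 and the noise level are made-up values.
rng = np.random.default_rng(2)
t_obs = np.linspace(0.0, 5.0, 26)
x_obs = 10.0 * np.exp(-0.7 * t_obs) + 0.05 * rng.normal(size=t_obs.size)

ks = np.linspace(0.1, 2.0, 1901)   # candidate rate constants
sse = [np.sum((10.0 * np.exp(-k * t_obs) - x_obs) ** 2) for k in ks]
k_hat = ks[int(np.argmin(sse))]
print(f"estimated k = {k_hat:.3f}")
```

    Real ODE systems rarely have closed-form solutions, so each candidate parameter value requires a numerical solve; smoothing-based estimators like PsLS avoid repeated solves by fitting derivatives of a smoothed trajectory instead.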

  9. Target Rotations and Assessing the Impact of Model Violations on the Parameters of Unidimensional Item Response Theory Models

    ERIC Educational Resources Information Center

    Reise, Steven; Moore, Tyler; Maydeu-Olivares, Alberto

    2011-01-01

    Reise, Cook, and Moore proposed a "comparison modeling" approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with…

  10. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å.

  11. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    SciTech Connect

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  12. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  13. Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results

    ERIC Educational Resources Information Center

    San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.

    2013-01-01

    In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model, 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…

  14. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  15. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  16. Estimation of MIMIC Model Parameters with Multilevel Data

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2011-01-01

    The purpose of this simulation study was to assess the performance of latent variable models that take into account the complex sampling mechanism that often underlies data used in educational, psychological, and other social science research. Analyses were conducted using the multiple indicator multiple cause (MIMIC) model, which is a flexible…

  17. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...

  18. Parameter Variability and Distributional Assumptions in the Diffusion Model

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    2013-01-01

    If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…

  19. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. 
Such insight can then be used to guide future data collection and model
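    The general pattern of such an analysis, propagating parameter and input uncertainties jointly by Monte Carlo and reporting a prediction interval, can be sketched as follows; the regression form and all numbers below are hypothetical, not APLE's equations:

```python
import numpy as np

# Monte Carlo propagation of parameter and input uncertainty through a
# toy loss relation. The regression slope's standard error stands in for
# "parameter uncertainty" and the soil-P spread for "input uncertainty".
rng = np.random.default_rng(3)
n = 100_000
slope = rng.normal(1.2, 0.1, n)     # regression parameter with std. error
soil_p = rng.normal(50.0, 5.0, n)   # uncertain model input
loss = slope * 0.01 * soil_p        # hypothetical P-loss relation

mean = loss.mean()
lo, hi = np.percentile(loss, [2.5, 97.5])
print(f"predicted loss: {mean:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```

    Repeating the run with one source of uncertainty held fixed separates the two contributions, which is how the relative importance of parameter versus input uncertainty can be compared.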

  20. A self-adjusting flow dependent formulation for the classical Smagorinsky model coefficient

    NASA Astrophysics Data System (ADS)

    Ghorbaniasl, G.; Agnihotri, V.; Lacor, C.

    2013-05-01

    In this paper, we propose an efficient formula for estimating the model coefficient of a Smagorinsky-model-based subgrid scale eddy viscosity. The method allows vanishing eddy viscosity through a vanishing model coefficient in regions where the eddy viscosity should be zero. The advantage of this method is that the coefficient of the subgrid scale model is a function of the flow solution, including the translational and the rotational velocity field contributions. Furthermore, the value of the model coefficient is optimized without using the dynamic procedure, thereby saving significantly on computational cost. In addition, the method guarantees the model coefficient to be always positive with low fluctuation in space and time. For validation purposes, three test cases are chosen: (i) a fully developed channel flow at Re_τ = 180 and 395, (ii) a fully developed flow through a rectangular duct of square cross section at Re_τ = 300, and (iii) a smooth subcritical flow past a stationary circular cylinder at a Reynolds number of Re = 3900, where the wake is fully turbulent but the cylinder boundary layers remain laminar. A main outcome is the good behavior of the proposed model as compared to reference data. We have also applied the proposed method to a CT-based simplified human upper airway model, where the flow is transient.
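    For reference, the classical Smagorinsky eddy viscosity that the proposed coefficient feeds into is nu_t = (C_s Δ)² |S|, with |S| = sqrt(2 S_ij S_ij). The sketch below evaluates it for a simple shear; the velocity-gradient tensor and the conventional constant C_s = 0.17 are illustrative, not the paper's self-adjusting coefficient:

```python
import numpy as np

# Classical Smagorinsky subgrid eddy viscosity:
#   nu_t = (C_s * Delta)^2 * |S|,  |S| = sqrt(2 * S_ij * S_ij)
# where S is the symmetric part of the velocity-gradient tensor.
def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    S = 0.5 * (grad_u + grad_u.T)           # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # strain-rate magnitude
    return (c_s * delta) ** 2 * S_mag

grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])        # simple shear, du/dy = 1
nu_t = smagorinsky_nu_t(grad_u, delta=0.1)
print(nu_t)
```

    The paper replaces the fixed c_s with a flow-dependent value that vanishes where the eddy viscosity should be zero, avoiding the expense of the dynamic procedure.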

  1. The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests

    ERIC Educational Resources Information Center

    Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.

    2004-01-01

    One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…

  2. Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2011-01-01

    This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…

  3. Toward a Transactional Model of Parent-Adolescent Relationship Quality and Adolescent Psychological Adjustment

    ERIC Educational Resources Information Center

    Fanti, Kostas A.; Henrich, Christopher C.; Brookmeyer, Kathryn A.; Kuperminc, Gabriel P.

    2008-01-01

    The present study includes externalizing problems, internalizing problems, mother-adolescent relationship quality, and father-adolescent relationship quality in the same structural equation model and tests the longitudinal reciprocal association among all four variables over a 1-year period. A transactional model in which adolescents'…

  4. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    PubMed Central

    Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameters identifiability and estimation in metabolic networks in order to overcome difficulties associated with lack of experimental data and large number of parameters, a common scenario in the modeling of such systems. As case study, the complex real problem of parameters identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated, composed by 18 differential ordinary equations and 35 kinetic rates, containing 125 parameters. With the procedure, model fit was improved for most of the measured metabolites, achieving 58 parameters estimated, including 5 unknown initial conditions. The results indicate that simultaneous parameters identifiability and estimation approach in metabolic networks is appealing, since model fit to the most of measured metabolites was possible even when important measures of intracellular metabolites and good initial estimates of parameters are not available. PMID:25654103

  6. The timing of the Black Sea flood event: Insights from modeling of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Goldberg, Samuel L.; Lau, Harriet C. P.; Mitrovica, Jerry X.; Latychev, Konstantin

    2016-10-01

    We present a suite of gravitationally self-consistent predictions of sea-level change since Last Glacial Maximum (LGM) in the vicinity of the Bosphorus and Dardanelles straits that combine signals associated with glacial isostatic adjustment (GIA) and the flooding of the Black Sea. Our predictions are tuned to fit a relative sea level (RSL) record at the island of Samothrace in the north Aegean Sea and they include realistic 3-D variations in viscoelastic structure, including lateral variations in mantle viscosity and the elastic thickness of the lithosphere, as well as weak plate boundary zones. We demonstrate that 3-D Earth structure and the magnitude of the flood event (which depends on the pre-flood level of the lake) both have significant impact on the predicted RSL change at the location of the Bosphorus sill, and therefore on the inferred timing of the marine incursion. We summarize our results in a plot showing the predicted RSL change at the Bosphorus sill as a function of the timing of the flood event for different flood magnitudes up to 100 m. These results suggest, for example, that a flood event at 9 ka implies that the elevation of the sill was lowered through erosion by ∼14-21 m during, and after, the flood. In contrast, a flood event at 7 ka suggests erosion of ∼24-31 m at the sill since the flood. More generally, our results will be useful for future research aimed at constraining the details of this controversial, and widely debated geological event.

  7. Continuous modeling of metabolic networks with gene regulation in yeast and in vivo determination of rate parameters.

    PubMed

    Moisset, P; Vaisman, D; Cintolesi, A; Urrutia, J; Rapaport, I; Andrews, B A; Asenjo, J A

    2012-09-01

    A continuous model of a metabolic network including gene regulation was developed to simulate metabolic fluxes during batch cultivation of the yeast Saccharomyces cerevisiae. The metabolic network includes reactions of glycolysis, gluconeogenesis, glycerol and ethanol synthesis and consumption, the tricarboxylic acid cycle, and protein synthesis. The carbon sources considered were glucose and then the ethanol synthesized during growth on glucose. The metabolic network has 39 fluxes, which represent the action of 50 enzymes and 64 genes, and it is coupled with a gene regulation network that defines enzyme synthesis (activities) and incorporates regulation by glucose (enzyme induction and repression), modeled using ordinary differential equations. The model includes enzyme kinetics, equations that follow the mass-action law and transport, as well as inducible, repressible, and constitutive enzymes of metabolism. The model was able to simulate a fermentation of S. cerevisiae during the exponential growth phase on glucose and the exponential growth phase on ethanol using only one set of kinetic parameters. All fluxes in the continuous model followed the behavior shown by the metabolic flux analysis (MFA) obtained from experimental results. The differences between the fluxes given by the model and the fluxes determined by the MFA do not exceed 25% in 75% of the cases during exponential growth on glucose, and 20% in 90% of the cases during exponential growth on ethanol. Furthermore, the fits of the fermentation profiles of biomass, glucose, and ethanol were 95%, 95%, and 79%, respectively. With these results the simulation was considered successful. A comparison between the simulation of the continuous model and the experimental data of the diauxic yeast fermentation for glucose, biomass, and ethanol shows an extremely good match using the parameters found. The small discrepancies between the fluxes obtained through MFA and those predicted by the differential…

  8. Squares of different sizes: effect of geographical projection on model parameter estimates in species distribution modeling.

    PubMed

    Budic, Lara; Didenko, Gregor; Dormann, Carsten F

    2016-01-01

    In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias in species with coastal distribution. As anticipated, model coefficients were different between long-lat and equal-area projections. Progressively smaller and more numerous cells at higher latitudes influenced the importance of parameters in the models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be…
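    The area-weighting correction described in this record follows from spherical geometry: the area of a fixed-degree cell shrinks roughly with the cosine of latitude. A minimal sketch (latitudes chosen for illustration):

    ```python
    import numpy as np

    # On a spherical Earth, a 1-degree x 1-degree cell's area scales with
    # cos(latitude), so long-lat grid cells can be weighted by cos(lat)
    # to restore an equal-area contribution when fitting a model.
    def cell_weights(lat_deg):
        """Relative cell area, normalized to 1 at the equator."""
        return np.cos(np.radians(lat_deg))

    lats = np.array([0.0, 30.0, 60.0, 80.0])
    w = cell_weights(lats)
    print(w.round(3))  # [1.    0.866 0.5   0.174]
    ```

    Such weights would typically be passed as per-observation weights (e.g. a `sample_weight` argument) to whatever regression routine fits the distribution model.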

  10. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    PubMed

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. PMID:26017545
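    The Monte Carlo reference method mentioned in this record (Saltelli's pick-freeze estimator for first-order Sobol indices) can be sketched in a few lines. This is a generic illustration on the standard Ishigami test function, not the vascular access model of the paper:

    ```python
    import numpy as np

    def sobol_first_order(f, k, n, rng):
        """First-order Sobol indices of f on [0,1]^k via the
        Saltelli-style pick-freeze Monte Carlo estimator."""
        A = rng.random((n, k))
        B = rng.random((n, k))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(k)
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # replace only column i
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # Ishigami function (inputs rescaled from [0,1] to [-pi, pi]); its
    # analytic first-order indices are about 0.31, 0.44 and 0.
    def ishigami(X):
        x = -np.pi + 2.0 * np.pi * X
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    S = sobol_first_order(ishigami, k=3, n=20000, rng=np.random.default_rng(0))
    ```

    Each index estimate costs n model runs per parameter plus 2n base runs, which is exactly the expense the paper's two-step Morris-then-gPCE approach is designed to avoid.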

  11. A review of distributed parameter groundwater management modeling methods.

    USGS Publications Warehouse

    Gorelick, S.M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation, and water allocation. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady-state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. -from Author
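    A toy instance of the linear-programming "hydraulics" category above: maximize total pumping subject to drawdown limits at control points, using a response-matrix formulation. All coefficients below are invented for illustration, not taken from any real aquifer study.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical two-well problem: R[i, j] is the drawdown produced at
    # control point i per unit pumping at well j (response matrix).
    R = np.array([[0.8, 0.3],
                  [0.3, 0.9]])
    s_max = np.array([2.0, 2.5])  # allowable drawdowns (m)

    # linprog minimizes, so negate the objective to maximize q1 + q2
    # subject to R @ q <= s_max and q >= 0.
    res = linprog(c=[-1.0, -1.0], A_ub=R, b_ub=s_max, bounds=[(0, None)] * 2)
    q = res.x  # optimal pumping rates
    ```

    The optimum sits at the vertex where both drawdown constraints bind, the typical structure of these hydraulic management solutions.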

  12. On the Influence of Material Parameters in a Complex Material Model for Powder Compaction

    NASA Astrophysics Data System (ADS)

    Staf, Hjalmar; Lindskog, Per; Andersson, Daniel C.; Larsson, Per-Lennart

    2016-10-01

    Parameters in a complex material model for powder compaction, based on a continuum mechanics approach, are evaluated using real insert geometries. The parameter sensitivity with respect to density and stress after compaction, pertinent to a wide range of geometries, is studied in order to investigate completeness and limitations of the material model. Finite element simulations with varied material parameters are used to build surrogate models for the sensitivity study. The conclusion from this analysis is that a simplification of the material model is relevant, especially for simple insert geometries. Parameters linked to anisotropy and the plastic strain evolution angle have a small impact on the final result.

  14. From global to local: exploring the relationship between parameters and behaviors in models of electrical excitability.

    PubMed

    Fletcher, Patrick; Bertram, Richard; Tabak, Joel

    2016-06-01

    Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K(+) conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K(+) conductances were captured across a wide variety of contexts of model parameters. For each type of K(+) conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20) parameter spaces of excitable cell and signaling models.

  15. When the Optimal Is Not the Best: Parameter Estimation in Complex Biological Models

    PubMed Central

    Fernández Slezak, Diego; Suárez, Cecilia; Cecchi, Guillermo A.; Marshall, Guillermo; Stolovitzky, Gustavo

    2010-01-01

    Background The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. Results We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need of a theory that addresses this problem more generally. PMID:21049094

  16. Relaxation oscillation model of hemodynamic parameters in the cerebral vessels

    NASA Astrophysics Data System (ADS)

    Cherevko, A. A.; Mikhaylova, A. V.; Chupakhin, A. P.; Ufimtseva, I. V.; Krivoshapkin, A. L.; Orlov, K. Yu

    2016-06-01

    Simulation of blood flow under normal as well as pathological conditions is an extremely complex problem of great current interest, both from the point of view of fundamental hydrodynamics and for medical applications. This paper proposes a Van der Pol - Duffing nonlinear oscillator equation as a model describing relaxation oscillations of blood flow in the cerebral vessels. The model is based on patient-specific clinical flow data obtained during neurosurgical operations at the Meshalkin Novosibirsk Research Institute of Circulation Pathology. The stability of the model is demonstrated through variations of the initial data and coefficients. It is universal and describes pressure and velocity fluctuations in different cerebral vessels (arteries, veins, sinuses), as well as in a laboratory model of carotid bifurcation. The derived equation describes the rheology of the "blood stream - elastic vessel wall - gelatinous brain environment" composite system and represents the state equation of this complex environment.
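    A Van der Pol - Duffing oscillator of the kind named in this record can be integrated directly. The coefficients below are illustrative defaults, not the patient-specific values fitted in the study; with b = 0 the equation reduces to the classic Van der Pol relaxation oscillator.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # x'' - mu*(1 - x^2)*x' + a*x + b*x^3 = 0, written as a first-order
    # system in (x, v). The nonlinear damping term mu*(1 - x^2)*x' is what
    # produces self-sustained relaxation oscillations.
    def vdp_duffing(t, y, mu=1.0, a=1.0, b=0.0):
        x, v = y
        return [v, mu * (1.0 - x * x) * v - a * x - b * x ** 3]

    sol = solve_ivp(vdp_duffing, (0.0, 60.0), [0.1, 0.0], max_step=0.01)
    amplitude = np.abs(sol.y[0][sol.t > 40.0]).max()  # after the transient
    ```

    Starting from a small perturbation, the trajectory settles onto a limit cycle; for mu = 1, b = 0 the amplitude is close to the textbook Van der Pol value of about 2.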

  17. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...

  18. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary…
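    The textbook regularization alluded to in this record is Tikhonov's: replace the ill-posed least-squares problem with a penalized one. A minimal sketch on an invented ill-conditioned toy problem (not the DALEC model):

    ```python
    import numpy as np

    def tikhonov(H, y, lam):
        """x = argmin ||H x - y||^2 + lam * ||x||^2, solved via the
        regularized normal equations (H^T H + lam * I) x = H^T y."""
        n = H.shape[1]
        return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

    rng = np.random.default_rng(1)
    # Ill-posed toy problem: two nearly collinear columns violate, in
    # practice, the "depends continuously on the input data" condition.
    base = np.linspace(0.0, 1.0, 40)
    H = np.column_stack([base, base + 1e-6 * rng.standard_normal(40)])
    y = H @ np.array([1.0, 2.0]) + 1e-3 * rng.standard_normal(40)

    x_reg = tikhonov(H, y, lam=1e-3)  # stable: splits the sum ~1.5 / ~1.5
    ```

    The penalty resolves the non-uniqueness by picking the minimum-norm compromise: the well-determined sum of the coefficients is recovered, while the indeterminate difference is driven toward zero.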

  19. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty, and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to selecting parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) critical relative humidity for cloud formation (RHMIN), and 2) boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.

  20. Parameter Optimization for the Gaussian Model of Folded Proteins

    NASA Astrophysics Data System (ADS)

    Erman, Burak; Erkip, Albert

    2000-03-01

    Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum-energy configurations of two-dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the "Gaussian Model". The conformations predicted by the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with the corresponding known minimum-energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum-energy structures.

  1. Simultaneous estimation of model parameters and diffuse pollution sources for river water quality modeling.

    PubMed

    Jun, K S; Kang, J W; Lee, K S

    2007-01-01

    Diffuse pollution sources along a stream reach are very difficult to both monitor and estimate. In this paper, a systematic method using an optimal estimation algorithm is presented for simultaneous estimation of diffuse pollution and model parameters in a stream water quality model. It was applied with the QUAL2E model to the South Han River in South Korea for optimal estimation of kinetic constants and diffuse loads along the river. Initial calibration results for kinetic constants selected from a sensitivity analysis reveal that diffuse source inputs for nitrogen and phosphorus are essential to satisfy the system mass balance. Diffuse loads for total nitrogen and total phosphorus were estimated by solving the expanded inverse problem. Comparison of kinetic constants estimated simultaneously with diffuse sources to those estimated without diffuse loads, suggests that diffuse sources must be included in the optimization not only for its own estimation but also for adequate estimation of the model parameters. Application of the optimization method to river water quality modeling is discussed in terms of the sensitivity coefficient matrix structure.
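    The joint estimation idea in this record can be illustrated with a one-constituent toy analogue rather than QUAL2E: a steady-state reach with a first-order decay constant k and an unknown uniform diffuse load q, both fitted together. All names and numbers below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Steady-state mass balance along the reach, with decay k and a
    # lumped diffuse source q per unit travel distance:
    #   dC/dx = -k*C + q   =>   C(x) = (C0 - q/k)*exp(-k*x) + q/k
    C0 = 8.0  # known boundary concentration, mg/L

    def profile(x, k, q):
        return (C0 - q / k) * np.exp(-k * x) + q / k

    rng = np.random.default_rng(7)
    x = np.linspace(0.0, 50.0, 25)                    # km along the reach
    c_obs = profile(x, 0.08, 0.2) + 0.05 * rng.standard_normal(x.size)

    # Fitting k and q simultaneously recovers both: the decay shape pins
    # down k, while the downstream asymptote q/k pins down q.
    (k_hat, q_hat), _ = curve_fit(profile, x, c_obs, p0=[0.05, 0.1])
    ```

    Estimating k alone while ignoring the diffuse term would bias the fit, which mirrors the record's conclusion that diffuse sources must enter the optimization for the kinetic constants themselves to be estimated adequately.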

  2. Ecohydrological model parameter selection for stream health evaluation.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in the development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), the k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of the best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our…

  4. Parental Depressive Symptoms and Adolescent Adjustment: A Prospective Test of an Explanatory Model for the Role of Marital Conflict

    PubMed Central

    Cummings, E. Mark; Cheung, Rebecca Y. M.; Koss, Kalsea; Davies, Patrick T.

    2014-01-01

    Despite calls for process-oriented models for child maladjustment due to heightened marital conflict in the context of parental depressive symptoms, few longitudinal tests of the mechanisms underlying these relations have been conducted. Addressing this gap, the present study examined multiple factors longitudinally that link parental depressive symptoms to adolescent adjustment problems, building on a conceptual model informed by emotional security theory (EST). Participants were 320 families (158 boys, 162 girls), including mothers and fathers, who took part when their children were in kindergarten (T1), second (T2), seventh (T3), eighth (T4) and ninth (T5) grades. Parental depressive symptoms (T1) were related to changes in adolescents’ externalizing and internalizing symptoms (T5), as mediated by parents’ negative emotional expressiveness (T2), marital conflict (T3), and emotional insecurity (T4). Evidence was thus advanced for emotional insecurity as an explanatory process in the context of parental depressive symptoms. PMID:24652484

  5. Measuring the basic parameters of neutron stars using model atmospheres

    NASA Astrophysics Data System (ADS)

    Suleimanov, V. F.; Poutanen, J.; Klochkov, D.; Werner, K.

    2016-02-01

    Model spectra of neutron star atmospheres are nowadays widely used to fit the observed thermal X-ray spectra of neutron stars. This fitting is the key element in the method of neutron star radius determination. Here, we present the basic assumptions used for neutron star atmosphere modeling as well as the main qualitative features of the stellar atmospheres leading to the deviations of the emergent model spectrum from a blackbody. We describe the properties of two of our model atmosphere grids: i) pure carbon atmospheres for relatively cool neutron stars (1-4 MK) and ii) hot atmospheres with Compton scattering taken into account. The results obtained by applying these grids to model the X-ray spectra of the central compact object in the supernova remnant HESS J1731-347, and two X-ray bursting neutron stars in low-mass X-ray binaries, 4U 1724-307 and 4U 1608-52, are presented. Possible systematic uncertainties associated with the obtained neutron star radii are discussed.
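    The deviation of atmosphere spectra from a blackbody is commonly summarized by a color-correction (hardening) factor f_c, with T_bb = f_c * T_eff; for the same observed flux this means the blackbody-fit radius understates the true emitting radius by a factor f_c squared. A minimal sketch of that arithmetic, with all numerical inputs hypothetical:

    ```python
    import math

    SIGMA_SB = 5.670374419e-5  # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4 (CGS)
    KPC_CM = 3.0857e21         # one kiloparsec in cm

    def blackbody_radius_km(flux_cgs, t_bb_K, distance_kpc):
        """Apparent radius from a pure-blackbody fit:
        F = sigma * T^4 * (R/d)^2  ->  R = d * sqrt(F / (sigma * T^4))."""
        d_cm = distance_kpc * KPC_CM
        r_cm = d_cm * math.sqrt(flux_cgs / (SIGMA_SB * t_bb_K ** 4))
        return r_cm / 1e5  # cm -> km

    def atmosphere_corrected_radius_km(r_bb_km, f_c):
        """Correct the blackbody radius with a color factor f_c:
        T_bb = f_c * T_eff implies R = f_c**2 * R_bb at fixed flux."""
        return f_c ** 2 * r_bb_km

    # Hypothetical source: blackbody fit gives 6 km at an assumed f_c of 1.4
    print(atmosphere_corrected_radius_km(6.0, 1.4))  # ~11.8 km
    ```

    This is only the non-relativistic bookkeeping; a full analysis, as in the abstract above, fits the tabulated atmosphere spectra directly and folds in gravitational redshift.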

  6. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R.; Chen, F.F.K.

    1993-01-01

    Plant system simulation crosses many disciplines. At its core is the representation of key components in the form of "mathematical models." These component models are functionally integrated to represent the plant. With today's low-cost, high-capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single-loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, realizing an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.
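    The component-model idea described above can be illustrated with the simplest possible dynamic building block: a single well-mixed zone whose contaminant concentration obeys the mass balance V dC/dt = Q (C_in - C), integrated by explicit Euler. This is a generic textbook sketch, not the paper's canyon-building model; all parameter values are hypothetical.

    ```python
    def simulate_zone(volume_m3, flow_m3_s, c_in, c0, dt, steps):
        """Explicit-Euler integration of a single-zone mass balance:
        V * dC/dt = Q * (C_in - C).  Returns the concentration history."""
        c = c0
        history = [c]
        for _ in range(steps):
            c += dt * flow_m3_s * (c_in - c) / volume_m3
            history.append(c)
        return history

    # Hypothetical 100 m^3 zone flushed with clean air at 1 m^3/s,
    # starting from a unit contaminant concentration
    h = simulate_zone(volume_m3=100.0, flow_m3_s=1.0, c_in=0.0,
                      c0=1.0, dt=1.0, steps=600)
    print(h[-1])  # decays toward the supply concentration c_in
    ```

    Chaining many such zone and duct models, each with its own flow and volume, is what lets a whole-plant ventilation simulation answer the capacity questions the abstract describes. Note the step size must satisfy dt * Q / V < 1 for the explicit scheme to decay monotonically.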

  7. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain the ranges over which the model parameters are expected to vary, which are crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C is
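    The Morris method mentioned above screens parameters by one-at-a-time "elementary effects": perturb each input by a step delta from many random base points and average the absolute effect on the output (the mu* statistic). The sketch below is a minimal illustration on the unit hypercube; the toy response and the idea that x0 stands in for a perfusion-like dominant parameter are assumptions for the example, not the paper's RFA model.

    ```python
    import random

    def morris_mu_star(model, n_params, n_traj=50, delta=0.25, seed=1):
        """One-at-a-time Morris screening on the unit hypercube.
        Returns mu* (mean absolute elementary effect) per parameter."""
        rng = random.Random(seed)
        abs_effects = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            # random base point kept in [0, 1 - delta] so each step stays in range
            x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
            y0 = model(x)
            for i in range(n_params):
                xp = list(x)
                xp[i] += delta
                # elementary effect of parameter i at this base point
                abs_effects[i].append(abs((model(xp) - y0) / delta))
        return [sum(e) / len(e) for e in abs_effects]

    # Toy response strongly driven by x0, weakly by x1
    mu = morris_mu_star(lambda x: 10.0 * x[0] + 0.1 * x[1] ** 2, n_params=2)
    print(mu)  # mu*[0] is ~10; mu*[1] is far smaller, so x0 ranks as important
    ```

    Each trajectory costs only n_params + 1 model runs, which is why the method suits expensive models like the RFA simulation described here; the full Morris design also reports the spread of the effects (sigma) to flag nonlinearity and interactions.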