Science.gov

Sample records for adjustable model parameters

  1. Tweaking Model Parameters: Manual Adjustment and Self Calibration

    NASA Astrophysics Data System (ADS)

    Schulz, B.; Tuffs, R. J.; Laureijs, R. J.; Lu, N.; Peschke, S. B.; Gabriel, C.; Khan, I.

    2002-12-01

    The reduction of P32 data is not always straightforward, and the application of the transient model needs tight control by the user. This paper describes how to access the model parameters within the P32Tools software and how to work with the "Inspect signals per pixel" panel in order to explore the parameter space and improve the model fit.

  2. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment

    PubMed Central

    Goeritz, Marie L.; Marder, Eve

    2014-01-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. PMID:25008414
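The control-adjustment idea above can be sketched in miniature. Below, a toy leaky "neuron" stands in for the multicompartmental model, and a weak Luenberger-style correction term pulls the model trajectory toward the recorded trace while the parameter is scanned; the model equation and all values are illustrative, not the paper's.

```python
import math

def simulate(g, v_rec=None, k=0.0, dt=0.01, steps=2000):
    """Integrate a toy leaky neuron dv/dt = -g*v + I(t).
    If a recorded trace v_rec is given, add a weak Luenberger-style
    correction k*(v_rec - v) that nudges the model toward the data."""
    v, trace = 0.0, []
    for i in range(steps):
        t = i * dt
        inp = math.sin(2.0 * t) + 0.5          # deterministic drive
        dv = -g * v + inp
        if v_rec is not None:
            dv += k * (v_rec[i] - v)           # observer coupling
        v += dt * dv
        trace.append(v)
    return trace

# "Recorded" data generated from the true parameter value
recorded = simulate(g=1.5)

def trace_error(g, k):
    model = simulate(g, v_rec=recorded, k=k)
    return sum((a - b) ** 2 for a, b in zip(model, recorded))

# With the observer on, the error over candidate g values is
# minimized at the true parameter.
errs = {g: trace_error(g, k=2.0) for g in [0.5, 1.0, 1.5, 2.0, 2.5]}
best = min(errs, key=errs.get)
print(best)
```

The weak coupling keeps the model and data trajectories aligned, so the error surface stays smooth in the parameter even when the uncorrected trajectories would drift apart.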

  3. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

    Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite the inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  4. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    NASA Astrophysics Data System (ADS)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling technique that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are referred to as the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements for the queuing system at this pharmacy. Waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4 and M/M/5) are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time considerably: almost 50% of the average patient waiting time is eliminated when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
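The M/M/c comparison behind these scenarios can be reproduced with the standard Erlang-C formula; the arrival and service rates below are hypothetical stand-ins, not the hospital's measured values.

```python
import math

def mmc_wait(lam, mu, c):
    """Mean waiting time in queue (Wq) for an M/M/c system via the
    Erlang-C formula. lam: arrival rate, mu: service rate per server,
    c: number of servers (requires lam < c*mu for stability)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: lam must be < c*mu")
    # Erlang-C probability that an arriving customer must wait
    summ = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1.0 - rho))
    p_wait = top / (summ + top)
    return p_wait / (c * mu - lam)    # Wq = C(c, a) / (c*mu - lam)

# Hypothetical rates: 4 patients/hour arrive, each pharmacist serves
# 2.5/hour; compare 2 vs 3 counters, as in the paper's scenarios.
wq2 = mmc_wait(4.0, 2.5, 2)   # M/M/2 (actual configuration)
wq3 = mmc_wait(4.0, 2.5, 3)   # M/M/3 (one pharmacist added)
print(wq2 > wq3)              # adding a server shortens the wait
```

With these illustrative rates, the added server cuts the mean queueing delay by an order of magnitude, mirroring the qualitative result reported above.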

  5. Resonance Parameter Adjustment Based on Integral Experiments

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
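The GLLS/Bayesian update that this methodology applies can be illustrated in its simplest one-parameter, one-response form; all numbers below are hypothetical and the scalar formula is only a sketch of the full matrix machinery in TSURFER/SAMMY.

```python
def glls_update(p0, var_p, s, m, var_m):
    """One-dimensional generalized linear least-squares (GLLS) update:
    prior parameter p0 with variance var_p, linear sensitivity s of a
    measured integral response m (variance var_m) to the parameter.
    Returns the adjusted parameter and its reduced posterior variance."""
    residual = m - s * p0                 # measured minus calculated
    gain = var_p * s / (s**2 * var_p + var_m)
    p_new = p0 + gain * residual
    var_new = var_p - gain * s * var_p    # posterior variance shrinks
    return p_new, var_new

# Hypothetical numbers: prior resonance parameter 1.00 with variance
# 0.01, sensitivity 2.0, integral benchmark response measured as 2.10
# with variance 0.0025.
p, v = glls_update(1.0, 0.01, 2.0, 2.10, 0.0025)
print(round(p, 4), round(v, 6))
```

The gain term shows how GLLS weights the adjustment: a precise benchmark (small var_m) pulls the parameter strongly toward the measurement, while a precise prior (small var_p) resists adjustment, which is exactly the "proper weight" property noted above.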

  6. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.

  7. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.

  8. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  9. Sensitivity of adjustment to parameter correlations and to response-parameter correlations

    SciTech Connect

    Wagschal, J.J.

    2011-07-01

    The adjusted parameters and response, and their respective posterior uncertainties and correlations, are presented explicitly as functions of all relevant prior correlations for the two parameters, one response case. The dependence of these adjusted entities on the various prior correlations is analyzed and portrayed graphically for various valid correlation combinations on a simple criticality problem. (authors)

  10. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Prohibited controls, adjustable parameters. 94.205 Section 94.205 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE COMPRESSION-IGNITION ENGINES Certification...

  11. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Prohibited controls, adjustable parameters. 94.205 Section 94.205 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE COMPRESSION-IGNITION ENGINES Certification...

  12. Effect of Adjusting Pseudo-Guessing Parameter Estimates on Test Scaling When Item Parameter Drift Is Present

    ERIC Educational Resources Information Center

    Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K.

    2015-01-01

    In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…

  13. Adjustment of Tsunami Source Parameters By Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Pires, C.; Miranda, P.

    Tsunami waveforms recorded at tide gauges can be used to adjust tsunami source parameters and, indirectly, seismic focal parameters. Simple inversion methods, based on ray-tracing techniques, use only a small fraction of the available information. More elaborate techniques, based on Green's function methods, also have some limitations in their scope. A new methodology, using a variational approach, allows for a much more general inversion, which can directly optimize focal parameters of tsunamigenic earthquakes. Idealized synthetic data and an application to the 1969 Gorringe Earthquake are used to validate the methodology.

  14. Impact of dose calculation models on radiotherapy outcomes and quality adjusted life years for lung cancer treatment: do we need to measure radiotherapy outcomes to tune the radiobiological parameters of a normal tissue complication probability model?

    PubMed Central

    Docquière, Nicolas; Bondiau, Pierre-Yves; Balosso, Jacques

    2016-01-01

    Background The equivalent uniform dose (EUD) radiobiological model can be applied to lung cancer treatment plans to estimate the tumor control probability (TCP) and the normal tissue complication probability (NTCP) using different dose calculation models. Then, based on the different calculated doses, the quality adjusted life years (QALY) score can be assessed against the uncomplicated tumor control probability (UTCP) concept in order to predict the overall outcome of the different treatment plans. Methods Nine lung cancer cases were included in this study. For each patient, two treatment plans were generated. The doses were calculated, respectively, with a pencil beam model, i.e., pencil beam convolution (PBC) with Modified Batho's (MB) 1D density correction, and with a point kernel model, i.e., the anisotropic analytical algorithm (AAA), using exactly the same prescribed dose, normalized to 100% at the isocentre point inside the target, and the same beam arrangements. The radiotherapy outcomes and QALY were compared. The bootstrap method was used to improve the estimation of the 95% confidence intervals (95% CI), and the Wilcoxon paired test was used to calculate P values. Results Compared to AAA, considered more realistic, PBC-MB overestimated the TCP while underestimating the NTCP, P<0.05. Thus the UTCP and the QALY score were also overestimated. Conclusions To correlate measured QALYs obtained from patient follow-up with QALYs calculated from DVH metrics, the more accurate dose calculation models should first be integrated into clinical use. Second, clinically measured outcomes are necessary to tune the parameters of the NTCP model used to link the treatment outcome with the QALY. Only after these two steps would the comparison and ranking of different radiotherapy plans be possible, avoiding over/under-estimation of QALY and any other clinico-biological estimates. PMID:28149761
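The EUD and NTCP quantities used above have compact closed forms. The sketch below uses the generalized EUD and a Niemierko-style logistic NTCP with hypothetical tissue parameters; the specific NTCP variant and parameter values in the paper may differ.

```python
def eud(dvh, a):
    """Generalized equivalent uniform dose from a differential DVH:
    dvh is a list of (fractional_volume, dose_Gy) bins; a is the
    tissue-specific volume-effect parameter."""
    return sum(v * d**a for v, d in dvh) ** (1.0 / a)

def ntcp(eud_gy, td50, gamma50):
    """Niemierko-style logistic NTCP as a function of EUD, with TD50
    (dose giving 50% complication probability) and slope gamma50."""
    return 1.0 / (1.0 + (td50 / eud_gy) ** (4.0 * gamma50))

# Hypothetical lung DVH: 60% of volume at 5 Gy, 30% at 20 Gy,
# 10% at 60 Gy (illustrative bins, not a clinical plan)
dvh = [(0.6, 5.0), (0.3, 20.0), (0.1, 60.0)]
e = eud(dvh, a=1.0)                  # a = 1 reduces EUD to mean dose
print(abs(e - 15.0) < 1e-6)          # mean dose of this DVH is 15 Gy
print(0.0 < ntcp(e, td50=24.5, gamma50=2.0) < 0.5)
```

Because NTCP depends on EUD through a steep logistic, a dose-calculation model that underestimates dose to normal tissue will underestimate NTCP, which is the mechanism behind the over/under-estimation the abstract warns about.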

  15. Europium Luminescence: Electronic Densities and Superdelocalizabilities for a Unique Adjustment of Theoretical Intensity Parameters

    PubMed Central

    Dutra, José Diogo L.; Lima, Nathalia B. D.; Freire, Ricardo O.; Simas, Alfredo M.

    2015-01-01

    We advance the concept that the charge factors of the simple overlap model and the polarizabilities of Judd-Ofelt theory for the luminescence of europium complexes can be effectively and uniquely modeled by perturbation theory on the semiempirical electronic wave function of the complex. With only three adjustable constants, we introduce expressions that relate: (i) the charge factors to electronic densities, and (ii) the polarizabilities to superdelocalizabilities that we derived specifically for this purpose. The three constants are then adjusted iteratively until the calculated intensity parameters, corresponding to the 5D0→7F2 and 5D0→7F4 transitions, converge to the experimentally determined ones. This adjustment yields a single unique set of only three constants per complex and semiempirical model used. From these constants, we then define a binary outcome acceptance attribute for the adjustment, and show that when the adjustment is acceptable, the predicted geometry is, on average, closer to the experimental one. An important consequence is that the terms of the intensity parameters related to dynamic coupling and electric dipole mechanisms will be unique. Hence, the important energy transfer rates will also be unique, leading to a single predicted intensity parameter for the 5D0→7F6 transition. PMID:26329420

  16. Dynamic adjustment of hidden node parameters for extreme learning machine.

    PubMed

    Feng, Guorui; Lan, Yuan; Zhang, Xinpeng; Qian, Zhenxing

    2015-02-01

    Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. ELMs have proved very fast and effective, especially for solving function approximation problems with a predetermined network structure. However, the network may contain insignificant hidden nodes. In this paper, we propose dynamic adjustment ELM (DA-ELM), which further tunes the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved in this paper that the energy error can be effectively reduced by applying the recursive expectation-minimization theorem. In DA-ELM, the input parameters of insignificant hidden nodes are updated in the decreasing direction of the energy error in each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as Bayesian ELM, optimally-pruned ELM, two-stage ELM, Levenberg-Marquardt, and the sensitivity-based linear learning method, as well as the preliminary ELM.
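A minimal ELM, without the dynamic-adjustment step, shows the baseline the paper improves on: hidden-layer weights are drawn at random and frozen, and only the output weights are solved by least squares. The toy 1-D regression problem and all values below are illustrative.

```python
import math, random

def elm_train(xs, ys, n_hidden=20, seed=0):
    """Minimal extreme learning machine for 1-D regression: random
    fixed hidden weights; output weights solved by least squares
    (ridge-stabilized normal equations + Gaussian elimination)."""
    rng = random.Random(seed)
    w = [(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(n_hidden)]
    def hidden(x):
        return [1.0 / (1.0 + math.exp(-(a * x + b))) for a, b in w]
    H = [hidden(x) for x in xs]
    n = n_hidden
    # Normal equations (H^T H + eps*I) beta = H^T y
    A = [[sum(H[k][i] * H[k][j] for k in range(len(xs)))
          + (1e-6 if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))
        beta[i] = s / A[i][i]
    return lambda x: sum(b * h for b, h in zip(beta, hidden(x)))

xs = [i / 20.0 for i in range(-40, 41)]          # x in [-2, 2]
model = elm_train(xs, [math.sin(x) for x in xs])
err = max(abs(model(x) - math.sin(x)) for x in xs)
print(err < 0.2)   # random features + least squares fit sin(x)
```

DA-ELM's contribution, per the abstract, is to go further and nudge the randomly drawn input weights of the weakest hidden nodes, instead of leaving them fixed as this baseline does.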

  17. Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment

    NASA Astrophysics Data System (ADS)

    Zheng, Shunyi; Wang, Zheng; Huang, Rongyong

    2015-04-01

    A zoom lens is more flexible for photogrammetric measurements under diverse environments than a fixed lens. However, challenges in calibration of zoom-lens cameras preclude the wide use of zoom lenses in the field of close-range photogrammetry. Thus, a novel zoom lens calibration method is proposed in this study. In this method, instead of conducting modeling after monofocal calibrations, we summarize the empirical zoom/focus models of intrinsic parameters first and then incorporate these parameters into traditional collinearity equations to construct the fundamental mathematical model, i.e., collinearity equations with zoom- and focus-related intrinsic parameters. Similar to monofocal calibration, images taken at several combinations of zoom and focus settings are processed in a single self-calibration bundle adjustment. In the self-calibration bundle adjustment, three types of unknowns, namely, exterior orientation parameters, unknown space point coordinates, and model coefficients of the intrinsic parameters, are solved simultaneously. Experiments on three different digital cameras with zoom lenses support the feasibility of the proposed method, and their relative accuracies range from 1:4000 to 1:15,100. Furthermore, the nominal focal length written in the exchangeable image file header is found to lack reliability in experiments. Thereafter, the joint influence of zoom lens instability and zoom recording errors is further analyzed quantitatively. The analysis result is consistent with the experimental result and explains the reason why zoom lens calibration can never have the same accuracy as monofocal self-calibration.

  18. 40 CFR 94.205 - Prohibited controls, adjustable parameters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification, selective enforcement audit, or in-use testing to determine compliance with the requirements of... necessary for proper operation of the engine. (e) Tier 1 Category 3 marine engines shall be adjusted... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Prohibited controls,...

  19. Determination and adjustment of drying parameters of Tunisian ceramic bodies

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Salah; Bennour, Ali; Srasra, Ezzeddine; Zargouni, Fouad

    2016-12-01

    This work deals with the mineralogical, physico-chemical and geotechnical analyses of representative Aptian clays in the north-east of Tunisia. X-ray diffraction reveals a predominance of illite (50-60 wt%) associated with kaolinite and interstratified illite/smectite. The accessory minerals detected in the raw materials are quartz, calcite and Na-feldspar. The average amounts of silica, alumina and alkalis are 52, 20 and 3.5 wt%, respectively. The contents of lime and iron vary between 4 and 8 wt%. The plasticity test shows medium values of the plasticity index (16-28 wt%). The linear drying shrinkage is weak (less than 0.99 wt%), which makes these clays suitable for fast drying. The firing shrinkage and expansion are limited. Lower firing and drying temperatures allow significant energy savings. Currently, these clays are used in industry for manufacturing earthenware tiles. For the optimum exploitation of the clay materials and improvement of production conditions, a mathematical formulation is established for the drying parameters. These models predict drying shrinkage (d), bending strength after drying (b) and residual moisture (r) from initial moisture (m) and pressing pressure (p).
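A formulation of this kind is essentially a multiple linear regression. The sketch below fits d = c0 + c1*m + c2*p by least squares on synthetic data generated with hypothetical coefficients; the paper's actual fitted coefficients are not reproduced here.

```python
def fit_plane(samples):
    """Least-squares fit of d = c0 + c1*m + c2*p from (m, p, d)
    triples, solving the 3x3 normal equations by Cramer's rule."""
    X = [(1.0, m, p) for m, p, _ in samples]
    y = [d for _, _, d in samples]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det3(A)
    coeffs = []
    for i in range(3):
        Ai = [row[:] for row in A]       # replace column i with b
        for r in range(3):
            Ai[r][i] = b[r]
        coeffs.append(det3(Ai) / D)
    return coeffs

# Synthetic, noise-free samples from d = 0.2 + 0.03*m - 0.01*p
# (hypothetical coefficients); the fit should recover them exactly.
data = [(m, p, 0.2 + 0.03 * m - 0.01 * p)
        for m in (4.0, 5.0, 6.0, 7.0) for p in (10.0, 20.0, 30.0)]
c0, c1, c2 = fit_plane(data)
print(round(c0, 3), round(c1, 3), round(c2, 3))  # 0.2 0.03 -0.01
```

The same fit applied to measured (m, p, d) triples, and analogously for bending strength b and residual moisture r, yields predictive formulas of the form the abstract describes.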

  20. Adjustment parameters in the Betts-Miller scheme of convection over South America

    NASA Astrophysics Data System (ADS)

    Luís Gomes, Jorge; Chou, Sin Chan; Lorena Guida, Lucas

    2013-04-01

    The Eta model has been used operationally at CPTEC/INPE since 1996. This model uses the Betts-Miller-Janjic (BMJ) convection scheme, which was developed based on convective adjustment. For the construction of the reference temperature and moisture profiles, three parameters are defined: the stability weight, the saturation pressure departure, and the adjustment time period. To define an optimum set of parameters over South America, a number of experiments were carried out at CPTEC/INPE, and the best set was adopted for the operational runs. The parameters are homogeneous over the domain covered by the model and kept constant for the whole year; such homogeneously specified profiles can provide misleading representations of various vertical structures. In this work the Eta model was configured with a 40-km grid size and 38 vertical layers. The model domain covers the whole of South America and part of Central America. The BMJ scheme was changed to permit a different set of parameter values at each model grid point. We noted in the control runs that the equitable threat and bias scores of the quantitative precipitation forecasts (QPF) show different skill depending on the verification region. A pronounced high bias in the precipitation forecast was found at mountain slopes, near the peak over Minas Gerais State in southeast Brazil. Experiments were done changing the saturation pressure departure values only near the mountain peaks. These changes produced different distributions and amounts of total precipitation, and the results indicate that they reduced the precipitation bias over the mountains. The Eta model with the BMJ scheme tends to produce most of the model total precipitation through the scheme; the experiments changed the partition of implicit and explicit precipitation.
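The role of the adjustment time period can be seen in a toy Betts-Miller-type relaxation step, where each model level is nudged toward its reference profile. The profiles and timescales below are illustrative only, not the Eta model's operational values.

```python
def bm_adjust(profile, ref_profile, dt, tau):
    """One Betts-Miller-style relaxation step: nudge each model level
    toward its reference profile over adjustment timescale tau.
    dt: model time step (s), tau: convective adjustment period (s)."""
    return [t + (tr - t) * dt / tau for t, tr in zip(profile, ref_profile)]

# Hypothetical three-level sounding (K) and a cooler post-convective
# reference profile
temps = [300.0, 295.0, 288.0]
ref   = [298.0, 294.0, 287.0]

# A shorter adjustment period relaxes harder toward the reference
# over the same forecast time
fast = temps[:]
slow = temps[:]
for _ in range(10):                           # ten 600 s model steps
    fast = bm_adjust(fast, ref, dt=600.0, tau=3000.0)
    slow = bm_adjust(slow, ref, dt=600.0, tau=12000.0)
print(abs(fast[0] - ref[0]) < abs(slow[0] - ref[0]))  # True
```

Making tau (or the saturation pressure departure that shapes the reference profile) vary by grid point, as the experiments above do, lets the scheme adjust more or less aggressively over, say, mountain slopes than over flat terrain.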

  1. Adjustment of pelvispinal parameters preserves the constant gravity line position.

    PubMed

    Geiger, Emanuel V; Müller, Otto; Niemeyer, Thomas; Kluba, Torsten

    2007-04-01

    There is high variance in sagittal morphology and complaints among subjects suffering from spinal disorders. Sagittal spinal alignment and clinical presentation are not closely related. Different parameters have been used to describe pelvispinal morphology based on standing lateral radiographs. We conducted a study using radiography of the lumbar spine combined with force platform data to examine the correlation between pelvispinal parameters and the gravity line position. Fifty consecutive patients with a mean age of 55 years (18-84 years) were compared to normal controls. Among patients we found statistically significant correlations between the following spinal parameters: lumbar lordosis and sacral slope (r=0.77; P<0.001), sacral slope and pelvic incidence (r=0.72; P<0.001), and pelvic tilt and overhang (r=-0.93; P<0.001). In patients and controls, the gravity line position was found to be located at 60 and 61%, respectively, of the foot length measured from the great toe, ranging from 53 to 69% when corrected for individual foot length. The results indicate that subjects with and without spinal disorders have their gravity line position localised within a very small range despite the high variability of lumbar lordosis and pelvic tilt.

  2. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the need for parameter adjustment. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
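The Kramers rate underlying the GPASR judgmental function has a closed form for the standard overdamped bistable (quartic double-well) system. The sketch below uses that textbook case, which may differ in detail from the paper's two-dimensional Duffing setting; parameter values are illustrative.

```python
import math

def kramers_rate(a, b, D):
    """Kramers escape rate for the overdamped bistable potential
    U(x) = -a*x**2/2 + b*x**4/4 driven by noise of intensity D.
    Barrier height dU = a**2/(4*b); rate = a/(sqrt(2)*pi)*exp(-dU/D)."""
    dU = a * a / (4.0 * b)
    return a / (math.sqrt(2.0) * math.pi) * math.exp(-dU / D)

# Parameter-adjusted SR reasoning in miniature: the escape rate grows
# monotonically with noise intensity D, so system parameters (a, b)
# and noise can be tuned until the rate matches the driving signal's
# timescale, which is where SR occurs.
rates = [kramers_rate(a=1.0, b=1.0, D=d) for d in (0.1, 0.2, 0.4)]
print(rates[0] < rates[1] < rates[2])  # True
```

The parameter-adjustment rules the abstract summarizes amount to moving (a, b) so that this rate condition can be met even when the signal amplitude, frequency, or noise intensity is not matched to the nominal SR regime.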

  3. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the need for parameter adjustment. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671

  4. Testing a theoretical model for examining the relationship between family adjustment and expatriates' work adjustment.

    PubMed

    Caligiuri, P M; Hyland, M M; Joshi, A; Bross, A S

    1998-08-01

    Based on theoretical perspectives from the work/family literature, this study tested a model for examining expatriate families' adjustment while on global assignments as an antecedent to expatriates' adjustment to working in a host country. Data were collected from 110 families that had been relocated for global assignments. Longitudinal data, assessing family characteristics before the assignment and cross-cultural adjustment approximately 6 months into the assignment, were coded. This study found that family characteristics (family support, family communication, family adaptability) were related to expatriates' adjustment to working in the host country. As hypothesized, the families' cross-cultural adjustment mediated the effect of family characteristics on expatriates' host-country work adjustment.

  5. Effect of flux adjustments on temperature variability in climate models

    NASA Astrophysics Data System (ADS)

    CMIP investigators; Duffy, P. B.; Bell, J.; Covey, C.; Sloan, L.

    2000-03-01

    It has been suggested that “flux adjustments” in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  6. Parameter extraction and transistor models

    NASA Technical Reports Server (NTRS)

    Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, QI

    1985-01-01

    Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, the limitations of the proposed models, and the complexity of the required numerical investigations.

  7. Photovoltaic module parameters acquisition model

    NASA Astrophysics Data System (ADS)

    Cibira, Gabriel; Koščová, Marcela

    2014-09-01

    This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. First, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on its equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment under standard irradiation and temperature conditions, and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to track the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters can be acquired under different realistic irradiation and temperature conditions as well as series resistances. Besides calculating the output power characteristics and efficiency of a PV module or system, the proposed model is validated by computing its statistical deviation from the reference model.
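
    The equivalent-circuit step above can be sketched outside MATLAB as well. The following Python sketch is not the authors' code; all parameter values (i_ph, i_0, n, r_s) are illustrative assumptions. It solves the implicit single-diode equation by bisection to produce I-V and P-V characteristics:

    ```python
    import numpy as np

    def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, r_s=0.005, t_c=298.15):
        """Solve the implicit single-diode equation
           I = I_ph - I_0 * (exp((V + I*R_s) / (n*V_t)) - 1)
        for the cell current I by bisection (robust, derivative-free)."""
        k, q = 1.380649e-23, 1.602176634e-19
        v_t = k * t_c / q                      # thermal voltage kT/q
        f = lambda i: i_ph - i_0 * (np.exp((v + i * r_s) / (n * v_t)) - 1.0) - i
        lo, hi = -1.0, i_ph + 1.0              # bracket with f(lo) > 0 > f(hi)
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    voltages = np.linspace(0.0, 0.75, 76)
    currents = np.array([pv_current(v) for v in voltages])
    powers = voltages * currents
    v_mpp = voltages[np.argmax(powers)]        # maximum-power-point voltage
    ```

    Fitting i_ph, i_0, n and r_s of such a model to the measured I-V string would mirror the optimization step the abstract describes.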

  8. Reductions in particulate and NO(x) emissions by diesel engine parameter adjustments with HVO fuel.

    PubMed

    Happonen, Matti; Heikkilä, Juha; Murtonen, Timo; Lehto, Kalle; Sarjovaara, Teemu; Larmi, Martti; Keskinen, Jorma; Virtanen, Annele

    2012-06-05

    Hydrotreated vegetable oil (HVO) diesel fuel is a promising biofuel candidate that can complement or substitute for traditional diesel fuel in engines. It has already been reported that changing the fuel from conventional EN590 diesel to HVO decreases exhaust emissions. However, as the fuels have certain chemical and physical differences, it is clear that the full advantage of HVO cannot be realized unless the engine is optimized for the new fuel. In this article, we studied how much exhaust emissions can be reduced by adjusting engine parameters for HVO. The results indicate that, at all the studied loads (50%, 75%, and 100%), particulate mass and NO(x) can both be reduced by over 25% through engine parameter adjustments. Further, the emission reduction was even higher when the target of adjusting engine parameters was to exclusively reduce either particulates or NO(x). In addition to particulate mass, different indicators of particulate emissions were also compared. These indicators included filter smoke number (FSN), total particle number, total particle surface area, and geometric mean diameter of the emitted particle size distribution. This comparison revealed a linear correlation between FSN and total particulate surface area in the low-FSN region.

  9. Identification of driver model parameters.

    PubMed

    Reński, A

    2001-01-01

    The paper presents a driver model that can be used in a computer simulation of a car on a curved path. The identification of the driver parameters consisted of comparing the results of computer calculations, obtained for the driver-vehicle-environment model with different driver data sets, with test results of the double lane-change manoeuvre (Standard No. ISO/TR 3888:1975, International Organization for Standardization [ISO], 1975) and the wind gust manoeuvre. The optimisation method makes it possible to choose, for each real driver, a set of driver model parameters for which the differences between test and calculation results are smallest. The presented driver model can be used in investigating the driver-vehicle control system, which makes it possible to adapt the car design to the psychophysical characteristics of a driver.
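
    As a rough illustration of this identification idea, the sketch below fits the gains of a toy preview-driver law by minimising the squared difference between a "measured" lane-change trace and simulated ones over a parameter grid. The model, the gains k_p and k_d, and the target path are hypothetical stand-ins, not the paper's driver-vehicle-environment model:

    ```python
    import numpy as np

    def lane_change_sim(k_p, k_d, dt=0.01, steps=500):
        """Toy driver-vehicle loop: the driver steers toward a lane-change
        target path with a PD law on lateral position and velocity."""
        t = np.arange(steps) * dt
        target = np.where(t < 2.5, 3.5, 0.0)      # lane offset in metres
        y = v = 0.0
        trace = np.empty(steps)
        for i in range(steps):
            a = k_p * (target[i] - y) - k_d * v   # driver action -> lateral accel.
            v += a * dt
            y += v * dt
            trace[i] = y
        return trace

    # "Test-track" data from a reference driver, then identification of
    # (k_p, k_d) by minimising the squared trace difference over a grid.
    measured = lane_change_sim(4.0, 2.5)
    grid = [(kp, kd) for kp in np.linspace(1.0, 8.0, 15)
                     for kd in np.linspace(0.5, 5.0, 10)]
    errors = [np.sum((lane_change_sim(kp, kd) - measured) ** 2) for kp, kd in grid]
    kp_hat, kd_hat = grid[int(np.argmin(errors))]
    ```

    The paper's procedure works the same way in outline: simulate with candidate driver parameters, compare against manoeuvre test data, and keep the set with the smallest discrepancy.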

  10. Parameter identification in continuum models

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.

    1983-01-01

    Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented.

  11. Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data

    PubMed Central

    Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.

    2009-01-01

    Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053

  12. Deformation measurement using digital image correlation by adaptively adjusting the parameters

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    2016-12-01

    As a contactless full-field displacement and strain measurement technique, two-dimensional digital image correlation (DIC) has been increasingly employed to reconstruct in-plane deformation in the field of experimental mechanics. In practical applications, it has been demonstrated that the selection of subset size and search zone size exerts a critical influence on DIC measurement results, especially when decorrelation occurs between the reference image and the deformed image due to large deformation over the search zone involved. The correlation coefficient is an important parameter in DIC and provides the most direct link between the subset size and the search zone. A self-adaptive correlation parameter adjustment method is proposed that uses a correlation coefficient threshold to adjust the sizes of the subset and search zone and thus realize the measurement efficiently. The feasibility and effectiveness of the proposed method are verified through a set of experiments, which indicate that the presented algorithm significantly reduces the cumbersome trial calculations required by traditional DIC, in which the initial correlation parameters need to be selected manually in advance based on practical experience.
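
    A minimal sketch of the threshold idea, assuming a synthetic speckle pattern and a plain ZNCC correlation criterion (the function names, sizes, and growth schedule are illustrative, not the paper's algorithm): the subset half-size is grown until the best correlation coefficient over the search zone exceeds the threshold.

    ```python
    import numpy as np

    def zncc(a, b):
        """Zero-normalised cross-correlation coefficient of two subsets."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def adaptive_match(ref, cur, center, search=5, half_sizes=(5, 9, 13), thr=0.9):
        """Grow the subset half-size until the best ZNCC over the search zone
        exceeds the threshold; return (displacement, coefficient, half_size)."""
        cy, cx = center
        for h in half_sizes:
            tpl = ref[cy - h:cy + h + 1, cx - h:cx + h + 1]
            best, best_d = -1.0, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    win = cur[cy + dy - h:cy + dy + h + 1,
                              cx + dx - h:cx + dx + h + 1]
                    c = zncc(tpl, win)
                    if c > best:
                        best, best_d = c, (dy, dx)
            if best >= thr:        # correlation threshold reached: stop growing
                break
        return best_d, best, h

    # Synthetic speckle image rigidly shifted by (2, 3) pixels.
    rng = np.random.default_rng(0)
    ref = rng.random((80, 80))
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
    disp, score, half = adaptive_match(ref, cur, center=(40, 40))
    ```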

  13. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    PubMed

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  14. Parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness--death model.

    PubMed

    Pradhan, Biswabrata; Dewanji, Anup

    2009-07-10

    In this work, we consider the parametric estimation of quality adjusted lifetime (QAL) distribution in progressive illness-death models. The main idea of this paper is to derive the theoretical distribution of QAL for the progressive illness-death models, under parametric models for the sojourn time distributions in different states, and then replace the model parameters by their estimates obtained by standard techniques of survival analysis. The method of estimation of the model parameters is also described. A data set of IBCSG Trial V has been analyzed for illustration. Extension to more general illness-death models is also discussed.

  15. Shape adjustment of cable mesh reflector antennas considering modeling uncertainties

    NASA Astrophysics Data System (ADS)

    Du, Jingli; Bao, Hong; Cui, Chuanzhen

    2014-04-01

    Cable mesh reflectors are nowadays the most important means of constructing large space antennas. The reflector surface of a cable mesh antenna has to be carefully adjusted to achieve the required accuracy, which is an effective way to compensate for manufacturing and assembly errors or other imperfections. In this paper, shape adjustment of cable mesh antennas is addressed. The required displacement of the reflector surface is determined with respect to a modified paraboloid whose axial vertex offset is also treated as a variable. The adjustment problem is then solved by minimizing the RMS error with respect to the desired paraboloid using the minimal-norm least-squares method. To deal with the modeling uncertainties, the adjustment is achieved by solving a simple worst-case optimization problem instead of directly using the least-squares method. A numerical example demonstrates that the worst-case method has good convergence and accuracy and is robust to perturbations.
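
    The least-squares core of such an adjustment can be illustrated as follows, assuming a hypothetical linearised sensitivity matrix A mapping cable adjustments to node surface errors (the paper's worst-case robust variant is omitted; this shows only the minimal-norm least-squares step):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes, n_cables = 60, 20

    # Hypothetical linearised model: surface normal errors r at the nodes
    # respond to cable-length adjustments x through a sensitivity matrix A.
    A = rng.normal(size=(n_nodes, n_cables))
    x_true = rng.normal(scale=0.1, size=n_cables)
    r = A @ x_true + rng.normal(scale=1e-3, size=n_nodes)   # measured errors

    # Minimal-norm least-squares adjustment: x_adj = pinv(A) @ r.
    x_adj, *_ = np.linalg.lstsq(A, r, rcond=None)

    rms_before = np.sqrt(np.mean(r ** 2))
    rms_after = np.sqrt(np.mean((r - A @ x_adj) ** 2))
    ```

    The worst-case formulation in the paper replaces this single solve with an optimization that stays accurate under bounded perturbations of the model.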

  16. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  17. Study on Optimization of Electromagnetic Relay's Reaction Torque Characteristics Based on Adjusted Parameters

    NASA Astrophysics Data System (ADS)

    Zhai, Guofu; Wang, Qiya; Ren, Wanbin

    The cooperative characteristics of an electromagnetic relay's attraction torque and reaction torque are the key property ensuring its reliability, and it is important to attain better cooperative characteristics by analyzing and optimizing the relay's electromagnetic and mechanical systems. In this paper, from the standpoint of changing the reaction torque of the mechanical system, the adjusted parameters (the armature's maximum angular displacement α_arm_max, the initial return-spring force F_init_spring, the normally closed (NC) contacts' force F_NC, the contacts' gap δ_gap, and the normally open (NO) contacts' overtravel δ_NO) were adopted as design variables, and an objective function was formulated with the purpose of increasing the breaking velocities of both the NC and NO contacts. Finally, a genetic algorithm (GA) was used to optimize the objective function. The accuracy of the calculation of the relay's dynamic characteristics was verified by experiment.

  18. Initial experience in operation of furnace burners with adjustable flame parameters

    SciTech Connect

    Garzanov, A.L.; Dolmatov, V.L.; Saifullin, N.R.

    1995-07-01

    The designs of burners currently used in tube furnaces (CP, FGM, GMG, GIK, GNF, etc.) do not have any provision for adjusting the heat-transfer characteristics of the flame, since the gas and air feed systems in these burners do not allow any variation of the parameters of mixture formation, even though this process is critical in determining the length, shape, and luminosity of the flame and also the furnace operating conditions: efficiency, excess air coefficient, flue gas temperature at the bridgewall, and other indexes. In order to provide control of the heat-transfer characteristics of the flame, the Elektrogorsk Scientific-Research Center (ENITs), on assignment from the Novo-Ufa Petroleum Refinery, developed a burner with diffusion regulation of the flame. The gas nozzle of the burner is made up of two coaxial gas chambers, with independent feed of gas from a common line through two supply lines.

  19. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been applied clinically, and more and more cases of positive therapeutic effect from transcranial light-emitting diode (LED) illumination are being reported. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the subject's head. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software provides on/off control of each LED array, setup of the illumination parameters, and a 3D view of the LLLT light dose distribution in the human subject according to the illumination setup. This light dose distribution was computed by a Monte Carlo model for voxelized media using the Visible Chinese Human head dataset, and displayed in 3D against the background of the head's anatomical structure. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed clear improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter adjustable and illumination-distribution visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.

  20. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    PubMed

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  1. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMMs) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model is highly robust in the presence of noisy and uncertain priors.

  2. Age of dam and sex of calf adjustments and genetic parameters for gestation length in Charolais cattle.

    PubMed

    Crews, D H

    2006-01-01

    To estimate adjustment factors and genetic parameters for gestation length (GES), AI and calving date records (n = 40,356) were extracted from the Canadian Charolais Association field database. The average time from AI to calving date was 285.2 d (SD = 4.49 d) and ranged from 274 to 296 d. Fixed effects were sex of calf, age of dam (2, 3, 4, 5 to 10, > or = 11 yr), and gestation contemporary group (year of birth x herd of origin). Variance components were estimated using REML and 4 animal models (n = 84,332) containing from 0 to 3 random maternal effects. Model 1 (M1) contained only direct genetic effects. Model 2 (M2) was M1 plus maternal genetic effects with the direct x maternal genetic covariance constrained to zero, and model 3 (M3) was M2 without the covariance constraint. Model 4 (M4) extended M3 to include a random maternal permanent environmental effect. Direct heritability estimates were high and similar among all models (0.61 to 0.64), and maternal heritability estimates were low, ranging from 0.01 (M2) to 0.09 (M3). Likelihood ratio tests and parameter estimates suggested that M4 was the most appropriate (P < 0.05) model. With M4, phenotypic variance (18.35 d²) was partitioned into direct and maternal genetic, and maternal permanent environmental components (h²d = 0.64 +/- 0.04, h²m = 0.07 +/- 0.01, r(d,m) = -0.37 +/- 0.06, and c² = 0.03 +/- 0.01, respectively). Linear contrasts were used to estimate that bull calves gestated 1.26 d longer (P < 0.02) than heifers, and adjustments to a mature equivalent (5 to 10 yr old) age of dam were 1.49 (P < 0.01), 0.56 (P < 0.01), 0.33 (P < 0.01), and -0.24 (P < 0.14) d for GES records of calves born to 2-, 3-, 4-, and > or = 11-yr-old cows, respectively. Bivariate animal models were used to estimate genetic parameters for GES with birth and adjusted 205-d weaning weights, and postweaning gain. Direct GES was positively correlated with direct birth weight (BWT; 0.34 +/- 0.04) but negatively correlated with maternal

  3. Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG

    PubMed Central

    Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong

    2013-01-01

    Recently, data with complex characteristics, such as epilepsy electroencephalography (EEG) time series, have emerged. Epilepsy EEG data have special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcibly adjusts the coefficients to lie between −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, which reflects the special characteristics of the data, to forecast future values of epilepsy EEG data. Experimental results show that, compared to previous methods, the proposed method forecasts faster and more accurately. PMID:23710252
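
    A simplified sketch of the autoregressive part: a least-squares AR fit whose coefficients are coerced into [−1, 1] by clipping, used here as a stand-in for the paper's random-coefficient technique (the fractal-dimension order selection is not reproduced; the order and data below are assumptions):

    ```python
    import numpy as np

    def fit_ca_ar(x, order):
        """Least-squares AR(p) fit whose coefficients are coerced into
        [-1, 1] by clipping (stand-in for the coercive adjustment)."""
        X = np.column_stack([x[order - 1 - k:len(x) - 1 - k] for k in range(order)])
        coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        return np.clip(coef, -1.0, 1.0)

    def forecast(history, coef, steps):
        """Iterate the fitted AR model forward `steps` values."""
        h = list(history)
        for _ in range(steps):
            h.append(sum(c * h[-1 - k] for k, c in enumerate(coef)))
        return h[len(history):]

    # Synthetic AR(2) series standing in for one EEG channel.
    rng = np.random.default_rng(42)
    x = np.zeros(2000)
    for t in range(2, 2000):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)

    coef = fit_ca_ar(x, order=2)   # the paper picks the order via fractal dimension
    preds = forecast(x, coef, steps=5)
    ```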

  4. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    PubMed Central

    Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

    Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio is termed Transmission Ratio Distortion (TRD); it occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed consistent results with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies. PMID:27630667

  5. End-point parameter adjustment on a small desk-top programmable calculator for logit-log analysis of radioimmunoassay data.

    PubMed

    Hatch, K F; Coles, E; Busey, H; Goldman, S C

    1976-08-01

    We describe an improved method of logit-log curve fitting, by adjusting end-point parameters in radioimmunoassay studies, for use with a small desk-top programmable calculator. Straight logit-log analyses are often deficient because of their high sensitivity to small errors in the end-point parameters B0 and NSB (the actual measured activity in the tubes). The literature suggests techniques for adjusting these end-point parameters, but they require too much computing time and programming space to be used with a desk-top programmable calculator. The extension to the logit-log model presented here is easily handled by the programmable calculator and provides a good estimate of the change required in B0 and NSB to obtain a better fit. The program requires 1.5 min to run on our desk-top programmable calculator, and has resulted in improved data analysis for all of the 11 types of radioimmunoassay studied.
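
    The end-point adjustment idea can be sketched numerically, assuming a synthetic standard curve and a simple grid scan over B0 and NSB (a modern brute-force stand-in for the calculator program, not the authors' algorithm): the logit-log line is refit for each candidate end-point pair and the pair giving the straightest line wins.

    ```python
    import numpy as np

    def logit_log_sse(dose, counts, b0, nsb):
        """Sum of squared residuals of the straight-line fit
           logit((B - NSB)/(B0 - NSB)) = a + b * ln(dose)."""
        y = np.clip((counts - nsb) / (b0 - nsb), 1e-6, 1 - 1e-6)
        logit = np.log(y / (1.0 - y))
        X = np.column_stack([np.ones_like(dose), np.log(dose)])
        coef, *_ = np.linalg.lstsq(X, logit, rcond=None)
        resid = logit - X @ coef
        return float(resid @ resid)

    # Synthetic standard curve whose true end points are B0 = 10000, NSB = 500.
    rng = np.random.default_rng(5)
    dose = np.logspace(-1, 2, 10)
    y_true = 1.0 / (1.0 + np.exp(-(1.0 - 1.2 * np.log(dose))))
    counts = 500.0 + 9500.0 * y_true + rng.normal(scale=10.0, size=dose.size)

    # End-point adjustment: scan B0 and NSB for the best straight-line fit.
    sse_best, b0_hat, nsb_hat = min(
        (logit_log_sse(dose, counts, b0, nsb), b0, nsb)
        for b0 in np.linspace(9500.0, 10500.0, 21)
        for nsb in np.linspace(300.0, 700.0, 21))
    ```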

  6. Parameter Identifiability of Fundamental Pharmacodynamic Models

    PubMed Central

    Janzén, David L. I.; Bergenholm, Linnéa; Jirstrand, Mats; Parkinson, Joanna; Yates, James; Evans, Neil D.; Chappell, Michael J.

    2016-01-01

    Issues of parameter identifiability of routinely used pharmacodynamic models are considered in this paper. The structural identifiability of 16 commonly applied pharmacodynamic model structures was analyzed analytically, using the input-output approach. Both fixed-effects versions (non-population, no between-subject variability) and mixed-effects versions (population, including between-subject variability) of each model structure were analyzed. All models were found to be structurally globally identifiable under conditions of fixing either one of two particular parameters. Furthermore, an example was constructed to illustrate the importance of sufficient data quality and show that structural identifiability is a prerequisite, but not a guarantee, for successful parameter estimation and practical parameter identifiability. This analysis was performed by generating artificial data of varying quality to a structurally identifiable model with known true parameter values, followed by re-estimation of the parameter values. In addition, to show the benefit of including structural identifiability as part of model development, a case study was performed applying an unidentifiable model to real experimental data. This case study shows how performing such an analysis prior to parameter estimation can improve the parameter estimation process and model performance. Finally, an unidentifiable model was fitted to simulated data using multiple initial parameter values, resulting in highly different estimated uncertainties. This example shows that although the standard errors of the parameter estimates often indicate a structural identifiability issue, reasonably “good” standard errors may sometimes mask unidentifiability issues. PMID:27994553

  7. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solution and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled by the PDE is represented via a basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques for posterior inference. Simulation studies show that the Bayesian method and the parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
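
    The computational burden described here (re-solving the PDE for every candidate parameter value) can be seen in a minimal sketch, assuming a 1-D heat equation u_t = d·u_xx with a single unknown diffusivity d and a naive grid search (not the paper's cascading or Bayesian methods):

    ```python
    import numpy as np

    def solve_heat(d, nx=50, nt=200, dt=1e-4):
        """Explicit finite-difference solution of u_t = d * u_xx on [0, 1]
        with u = 0 at both walls; returns the profile at t = nt*dt."""
        x = np.linspace(0.0, 1.0, nx)
        u = np.sin(np.pi * x)                  # initial condition
        r = d * dt / (x[1] - x[0]) ** 2        # stability requires r <= 0.5
        for _ in range(nt):
            u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return u

    # "Measurements" generated from a true diffusivity plus noise.
    d_true = 0.8
    rng = np.random.default_rng(3)
    data = solve_heat(d_true) + rng.normal(scale=1e-3, size=50)

    # Naive estimation: re-solve the PDE for every candidate d -- exactly the
    # repeated numerical solving whose cost the paper seeks to avoid.
    grid = np.linspace(0.1, 2.0, 191)
    sse = [np.sum((solve_heat(d) - data) ** 2) for d in grid]
    d_hat = grid[int(np.argmin(sse))]
    ```

    The basis-expansion methods in the paper sidestep this inner PDE solve, which is where the computational savings come from.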

  8. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  9. Study of dual wavelength composite output of solid state laser based on adjustment of resonator parameters

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Nie, Jinsong; Wang, Xi; Hu, Yuze

    2016-10-01

    The 1064 nm fundamental wave (FW) and the 532 nm second harmonic wave (SHW) of the Nd:YAG laser have been widely applied in many fields. In some military applications requiring interference in both the visible and near-infrared spectral ranges, de-identification interference technology based on the dual-wavelength composite output of FW and SHW offers an effective way of making the device or equipment miniaturized and low cost. In this paper, the application of 1064 nm and 532 nm dual-wavelength composite output technology to military electro-optical countermeasures is studied. A resonator configuration that can achieve composite laser output with high power, high beam quality and high repetition rate is proposed. Considering the thermal lens effect, the stability of this resonator is analyzed based on the theory of cavity transfer matrices. It is shown that with increasing thermal effect, the intracavity fundamental mode volume decreases, resulting in fluctuation of the peak of the cavity stability parameter. To explore the impact of the resonator parameters on the characteristics and output ratio of the composite laser, dual-wavelength composite output models of the solid-state laser in both continuous and pulsed operation are established using steady-state and rate equations. Through theoretical simulation and analysis, the optimal KTP length and best FW transmissivity are obtained. An experiment is then carried out to verify the correctness of the theoretical calculations.

  10. Screening parameters for the relativistic hydrogenic model

    NASA Astrophysics Data System (ADS)

    Lanzini, Fernando; Di Rocco, Héctor O.

    2015-12-01

    We present a Relativistic Screened Hydrogenic Model (RSHM) in which the screening parameters depend on the variables (n, l, j) and the parameters (Z, N). These screening parameters were derived theoretically in a neat form, without using experimental values or numerical values from self-consistent codes. The results of the model compare favorably with those obtained using more sophisticated approaches. For the interested reader, a copy of our code can be requested from the corresponding author.

  11. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion battery modelling that takes aging effects into account. It first analyses the impact of aging on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack consisting of four cells.
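
    A sketch of the one-parameter idea, under our illustrative assumption (not the paper's exact formulation) that aging rescales the stoichiometric window so that OCV_aged(z) = OCV_fresh(s·z), with the single parameter s recovered by a scan:

    ```python
    import numpy as np

    # Hypothetical fresh-cell OCV curve as a polynomial of state of charge z.
    fresh = np.polynomial.Polynomial([3.0, 1.2, -0.9, 0.4])   # volts, z in [0, 1]

    def ocv_aged(z, s):
        """Assumed one-parameter aging law: aging rescales the usable
        stoichiometric window, OCV_aged(z) = OCV_fresh(s * z)."""
        return fresh(s * np.asarray(z))

    # Synthetic aged measurements with a known s, plus sensor noise.
    s_true = 0.85
    rng = np.random.default_rng(7)
    z = np.linspace(0.05, 0.95, 30)
    v = ocv_aged(z, s_true) + rng.normal(scale=2e-3, size=z.size)

    # Recover the single adjustment parameter by scanning its range.
    grid = np.linspace(0.5, 1.0, 501)
    s_hat = grid[int(np.argmin([np.sum((ocv_aged(z, s) - v) ** 2) for s in grid]))]
    ```

    The paper's algorithm instead pins down the parameter from the charge exchanged between two fixed OCV points, but the one-dimensional nature of the adjustment is the same.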

  12. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…

  13. Mathematical modeling on experimental protocol of glucose adjustment for non-invasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Min, Xiaolin; Zou, Da; Xu, Kexin

    2012-03-01

    Currently, blood glucose concentration levels from OGTT (Oral Glucose Tolerance Test) results are used to build the PLS model in noninvasive blood glucose sensing by Near-Infrared (NIR) Spectroscopy. However, the single dynamic trend of blood glucose concentration produced by an OGTT is not varied enough to provide comprehensive data for a robust and accurate PLS model. In this talk, with the final purpose of improving the stability and accuracy of the PLS model, we introduce an integrated minimal model (IMM) of the glucose metabolism system. First, by adjusting parameters that represent different metabolism characteristics and individual differences, comparatively ideal glucose-adjustment programs were customized for different groups of people, and even for individuals. Second, with different glucose input types (oral administration, intravenous injection, or intravenous drip), we obtained various profiles of blood glucose concentration. By studying these methods of adjusting blood glucose concentration, we can thus customize corresponding experimental protocols of glucose adjustment for different people in noninvasive blood glucose sensing and supply comprehensive data for the PLS model.

  14. Variable deceleration parameter and dark energy models

    NASA Astrophysics Data System (ADS)

    Bishi, Binaya K.

    2016-03-01

    This paper deals with the Bianchi type-III dark energy model and the equation of state parameter in a first class of f(R,T) gravity. Here, R and T represent the Ricci scalar and the trace of the energy-momentum tensor, respectively. The exact solutions of the modified field equations are obtained by using (i) a linear relation between expansion scalar and shear scalar, (ii) a linear relation between state parameter and skewness parameter and (iii) a variable deceleration parameter. To obtain physically plausible cosmological models, the variable deceleration parameter with a suitable substitution leads to a scale factor of the form a(t) = [sinh(αt)]^(1/n), where α and n > 0 are arbitrary constants. It is observed that our models are accelerating for 0 < n < 1, while for n > 1 they exhibit a transition phase from deceleration to acceleration. Further, we discuss the physical properties of the models.
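The stated acceleration behaviour follows directly from the scale factor. As a sketch (our derivation from the quoted a(t), not reproduced from the paper):

```latex
a(t) = \left[\sinh(\alpha t)\right]^{1/n},\qquad
H \equiv \frac{\dot a}{a} = \frac{\alpha}{n}\coth(\alpha t),\qquad
q \equiv -\frac{a\ddot a}{\dot a^{2}} = -1-\frac{\dot H}{H^{2}}
      = n\,\mathrm{sech}^{2}(\alpha t)-1 .
```

At t = 0 this gives q = n − 1, and q → −1 as t → ∞; so for 0 < n < 1 the model accelerates throughout (q < 0 always), while for n > 1 it starts decelerating and crosses to acceleration when cosh²(αt) = n, matching the abstract's claim.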

  15. Dose Adjustment Strategy of Cyclosporine A in Renal Transplant Patients: Evaluation of Anthropometric Parameters for Dose Adjustment and C0 vs. C2 Monitoring in Japan, 2001-2010

    PubMed Central

    Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki

    2013-01-01

    The optimal use and monitoring of cyclosporine A (CyA) have remained unclear, and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; the secondary aim was to compare the usefulness of predose (C0) and 2-hour postdose (C2) concentration monitoring. An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficient was C0 > C2 (C0: r=0.6273; C2: r=0.5562). The linear regression analyses between AUC0-9 of CyA and candidate parameters indicated their potential usefulness in the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > Age. In conclusion, C2 monitoring after oral administration showed large variation and could carry a high risk of overdosing; it was therefore not considered useful as a single monitoring approach after oral dosing of CyA, but should rather be used together with C0 monitoring.
The regression analyses between AUC0-9 of CyA and anthropometric parameters indicated that IBW was potentially a superior predictor for dose adjustment of CyA compared with the empiric strategy using TBW (IBW: r=0.5181; TBW: r=0.3192); however, this finding seems to lack a pharmacokinetic rationale and thus warrants further basic and clinical
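The comparison in this study reduces to ranking single-sample predictors by the correlation coefficient of a simple linear regression. A minimal pure-Python sketch; the concentration and AUC numbers below are invented for illustration, not patient data:

```python
# Rank candidate predictors of AUC by Pearson's r, as in the study's
# linear regression analyses.  Values are hypothetical, not the trial data.
import statistics

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

c0  = [100, 150, 200, 250]          # hypothetical predose levels (ng/mL)
auc = [2100, 3050, 4000, 5100]      # hypothetical AUC0-9 values
r = pearson_r(c0, auc)
```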

  16. Adjusting power for a baseline covariate in linear models

    PubMed Central

    Glueck, Deborah H.; Muller, Keith E.

    2009-01-01

    SUMMARY The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543

  17. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  18. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    ERIC Educational Resources Information Center

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  19. Exploiting intrinsic fluctuations to identify model parameters.

    PubMed

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.

  20. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
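The core IUWLS idea, down-weighting observations in proportion to the uncertainty propagated from the inputs and re-computing the weights as the parameter estimate changes, can be sketched for a one-parameter model. This is a toy illustration of the weighting scheme, not the authors' implementation, and the weight formula is a standard errors-in-variables approximation assumed here:

```python
# Toy input-uncertainty weighted least squares for y ≈ theta * x, where the
# inputs x (e.g., pumping rates) carry known uncertainties sigma_x.
# Weights depend on theta, so they are updated iteratively.

def iuwls(x, y, sigma_y, sigma_x, iters=50):
    # Ordinary least-squares starting value.
    theta = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    for _ in range(iters):
        # Weight shrinks where the propagated input uncertainty is large.
        w = [1.0 / (sigma_y**2 + (theta * sx) ** 2) for sx in sigma_x]
        num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        den = sum(wi * xi * xi for wi, xi in zip(w, x))
        theta = num / den
    return theta
```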

  1. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
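The Green's-functions step described above amounts to a least-squares solve for the combination weights of the sensitivity experiments. A toy sketch with two experiments (all dimensions and data invented; the real system has six control parameters and far larger data vectors):

```python
# Find weights eta so that the linear combination of sensitivity-experiment
# responses G best matches the data d, via the normal equations
# (G^T G) eta = G^T d, here hard-coded for a two-experiment toy case.

def solve2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def greens_weights(G, d):
    # G: one response vector per sensitivity experiment.
    gtg = [[sum(gi * gj for gi, gj in zip(G[i], G[j])) for j in range(2)]
           for i in range(2)]
    gtd = [sum(gi * di for gi, di in zip(G[i], d)) for i in range(2)]
    return solve2(gtg, gtd)

eta = greens_weights([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]], [2.0, 3.0, 5.0])
```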

  2. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.

  3. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard

    PubMed Central

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase “RHINE WAAL UNIVERSITY” with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy). PMID:26733788

  5. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    PubMed

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  6. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
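The local (Fisher-information) end of this analysis can be illustrated with a toy model in which only the sum of two rate constants is identifiable; a zero eigenvalue of the FIM flags the unidentifiable direction. This sketches only the standard local analysis that the paper starts from, not its multiscale extension:

```python
import math

# Toy model y(t) = exp(-(k1 + k2) * t): only k1 + k2 is identifiable.
# The Fisher information matrix built from the parameter sensitivities is
# rank-deficient, exposing the unidentifiable direction k1 - k2.

def fim(k1, k2, times):
    # dy/dk1 == dy/dk2 here, so every entry of S^T S is identical (rank 1).
    s = [-t * math.exp(-(k1 + k2) * t) for t in times]
    ss = sum(a * a for a in s)
    return [[ss, ss], [ss, ss]]

def eigvals2(m):
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr - disc) / 2.0, (tr + disc) / 2.0

lam_min, lam_max = eigvals2(fim(0.5, 0.5, [0.5, 1.0, 2.0]))
# lam_min vanishes: the direction k1 - k2 cannot be estimated from this data.
```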

  7. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.

  8. Models and parameters for environmental radiological assessments

    SciTech Connect

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  9. Adjusting exposure limits for long and short exposure periods using a physiological pharmacokinetic model.

    PubMed

    Andersen, M E; MacNaughton, M G; Clewell, H J; Paustenbach, D J

    1987-04-01

    The rationale for adjusting occupational exposure limits for unusual work schedules is to assure, as much as possible, that persons on these schedules are placed at no greater risk of injury or discomfort than persons who work a standard 8 hr/day, 40 hr/week. For most systemic toxicants, the risk index upon which the adjustments are made will be either peak blood concentration or integrated tissue dose, depending on the chemical's presumed mechanism of toxicity. Over the past ten years, at least four different models have been proposed for adjusting exposure limits for unusually short and long work schedules. This paper advocates use of a physiologically-based pharmacokinetic (PB-PK) model for determining adjustment factors for unusual exposure schedules, an approach that should be more accurate than those proposed previously. The PB-PK model requires data on the blood:air and tissue:blood partition coefficients, the rate of metabolism of the chemical, organ volumes, organ blood flows and ventilation rates in humans. Laboratory data on two industrially important chemicals--styrene and methylene chloride--were used to illustrate the PB-PK approach. At inhaled concentrations near their respective 8-hr Threshold Limit Value-Time-weighted averages (TLV-TWAs), both of these chemicals are primarily eliminated from the body by metabolism. For these two chemicals, the appropriate risk indexing parameters are integrated tissue dose or total amount of parent chemical metabolized. Since methylene chloride is metabolized to carbon monoxide, the maximum blood carboxyhemoglobin concentrations also might be useful as an index of risk for this chemical.(ABSTRACT TRUNCATED AT 250 WORDS)
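The indexing idea, scaling the limit for an unusual schedule so the integrated metabolized dose matches that of the standard 8-hr shift, can be sketched with a deliberately oversimplified one-compartment model. All rate constants below are invented; a real PB-PK model tracks multiple tissue compartments, partition coefficients, and blood flows:

```python
# Toy one-compartment sketch of the PB-PK adjustment: integrate body burden
# over a 24-h day under first-order uptake and metabolism, then scale the
# exposure limit for a 12-h shift to match the 8-h metabolized dose.
# Constants are hypothetical, for illustration only.

def integrated_dose(conc_ppm, hours, uptake=1.0, k_met=0.2, dt=0.01):
    amount, dose, t = 0.0, 0.0, 0.0
    while t < 24.0:                        # one full day, forward Euler
        inhaled = uptake * conc_ppm if t < hours else 0.0
        amount += (inhaled - k_met * amount) * dt
        dose += k_met * amount * dt        # cumulative amount metabolized
        t += dt
    return dose

d8 = integrated_dose(50.0, 8.0)            # 8-h shift at a nominal 50-ppm limit
# Scale the 12-h limit so its metabolized dose equals the 8-h schedule's:
adjusted_limit = 50.0 * d8 / integrated_dose(50.0, 12.0)
```

As expected, the longer shift gets a limit below 50 ppm, mirroring the downward adjustments the paper computes from the full PB-PK model.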

  10. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  11. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  12. Estimation of Model Parameters for Steerable Needles

    PubMed Central

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451

  13. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso possesses no oracle properties, which means it asymptotically performs as though the true underlying model was given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), to take into account the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data was simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
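The AALasso weighting idea can be sketched in the simplest setting: take the adaptive-lasso penalty weight for each covariate as SE(beta_ML) / |beta_ML| (the inverse of a t-statistic), so imprecise, collinearity-inflated coefficients are penalized harder. With an orthonormal design the penalized estimate reduces to soft thresholding; this toy reduction is assumed here for illustration, whereas real PK covariate models require an iterative mixed-effects fit:

```python
# Toy AALasso step: penalty weight = SE / |beta_ML|, applied via soft
# thresholding under an (assumed) orthonormal design.  Not the authors' code.

def soft_threshold(b, lam):
    return max(abs(b) - lam, 0.0) * (1 if b > 0 else -1)

def aalasso_orthonormal(beta_ml, se, lam):
    weights = [s / abs(b) for b, s in zip(beta_ml, se)]   # SE-to-coefficient ratio
    return [soft_threshold(b, lam * w) for b, w in zip(beta_ml, weights)]

# A precise coefficient (small SE) survives; a noisy one is shrunk to zero:
est = aalasso_orthonormal([2.0, 0.3], [0.1, 0.4], lam=0.5)
```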

  14. An automated approach for tone mapping operator parameter adjustment in security applications

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick

    2014-05-01

High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Unlike traditional low dynamic range (LDR) content, HDR content tends to be visually more appealing and realistic, as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and the visual quality tends to improve. HDR can also be directly exploited for new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying and tracking visual information which might otherwise be difficult with typical LDR content due to factors such as lack or excess of illumination, extreme contrast in the scene, etc. On the other hand, HDR may raise issues of increased privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMO) are used. In this paper, we present a universal method for tuning TMO parameters so as to preserve as much detail as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone-mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which can benefit security surveillance but may also increase privacy intrusion.
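As a toy illustration of parameter tuning for detail preservation (a sketch under invented assumptions, not the paper's actual method or metric), one can grid-search a simple gamma-type TMO for the parameter value that maximizes the entropy of the tone-mapped histogram:

```python
import numpy as np

def gamma_tmo(hdr, gamma):
    """Simple global TMO: normalize luminance, apply power-law compression."""
    x = hdr / hdr.max()
    return x ** gamma

def entropy(img, bins=64):
    """Shannon entropy of the intensity histogram (a crude proxy for detail)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def tune_gamma(hdr, candidates):
    """Grid-search the TMO parameter that maximizes output entropy."""
    scores = [entropy(gamma_tmo(hdr, g)) for g in candidates]
    return float(candidates[int(np.argmax(scores))])

rng = np.random.default_rng(1)
# Synthetic HDR-like image: log-normal luminance spanning several decades.
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(64, 64))
best = tune_gamma(hdr, np.linspace(0.1, 1.0, 10))
```

By construction the tuned parameter does at least as well as the default gamma of 1.0 under this criterion.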

  15. Dolphins Adjust Species-Specific Frequency Parameters to Compensate for Increasing Background Noise

    PubMed Central

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles’ frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise. PMID:25853825

  16. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    PubMed

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  17. Analysis of Modeling Parameters on Threaded Screws.

    SciTech Connect

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics, and, consequently, are paramount for calculating a structure's stiffness and energy dissipation prop- erties. However, analysts have not found the optimal method to model appropriately these bolted joints. The complexity of the screw geometry cause issues when generating a mesh of the model. This paper will explore different approaches to model a screw-substrate connec- tion. Model parameters such as mesh continuity, node alignment, wedge angles, and thread to body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aide in finding the optimal method to model bolted connections.

  18. Parameter identification and modeling of longitudinal aerodynamics

    NASA Technical Reports Server (NTRS)

    Aksteter, J. W.; Parks, E. K.; Bach, R. E., Jr.

    1995-01-01

    Using a comprehensive flight test database and a parameter identification software program produced at NASA Ames Research Center, a math model of the longitudinal aerodynamics of the Harrier aircraft was formulated. The identification program employed the equation error method using multiple linear regression to estimate the nonlinear parameters. The formulated math model structure adhered closely to aerodynamic and stability/control theory, particularly with regard to compressibility and dynamic manoeuvring. Validation was accomplished by using a three degree-of-freedom nonlinear flight simulator with pilot inputs from flight test data. The simulation models agreed quite well with the measured states. It is important to note that the flight test data used for the validation of the model was not used in the model identification.
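The equation-error method reduces to ordinary least squares on measured states. A minimal sketch with a hypothetical linear lift-coefficient model (all names and numbers are invented, not the Harrier model or NASA software):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
alpha = rng.uniform(-0.1, 0.2, n)    # angle of attack, rad
q = rng.uniform(-0.05, 0.05, n)      # pitch rate, rad/s
de = rng.uniform(-0.15, 0.15, n)     # elevator deflection, rad

# Hypothetical "true" coefficients: CL0, CL_alpha, CL_q, CL_de.
true = np.array([0.1, 5.2, 8.0, 0.9])
X = np.column_stack([np.ones(n), alpha, q, de])   # regressor matrix
CL = X @ true + 0.01 * rng.standard_normal(n)     # noisy "measured" lift coeff.

# Equation-error estimate: ordinary least squares on the measured states.
coef, *_ = np.linalg.lstsq(X, CL, rcond=None)
```

With clean state measurements the regression recovers the aerodynamic derivatives directly, which is why the equation-error approach is attractive when a rich flight-test database is available.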

  19. Multiple confidence intervals for selected parameters adjusted for the false coverage rate in monotone dose-response microarray experiments.

    PubMed

    Peng, Jianan; Liu, Wei; Bretz, Frank; Shkedy, Ziv

    2016-12-26

Benjamini and Yekutieli introduced the concept of the false coverage-statement rate (FCR) to account for selection when confidence intervals (CIs) are constructed only for the selected parameters. Dose-response analysis in dose-response microarray experiments is conducted only for genes having a monotone dose-response relationship, which is a selection problem. In this paper, we consider multiple CIs for the mean gene expression difference between the highest dose and control in monotone dose-response microarray experiments, for selected parameters adjusted for the FCR. A simulation study is conducted to study the performance of the proposed method. The method is applied to a real dose-response microarray experiment with 16,998 genes for illustration.
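The FCR adjustment can be sketched as follows: if R of the m parameters are selected, each selected CI is constructed at level 1 - Rq/m rather than 1 - q. A minimal sketch assuming normal-theory intervals (the helper name and data are hypothetical):

```python
from statistics import NormalDist

def fcr_adjusted_cis(estimates, ses, selected, q=0.05):
    """FCR-adjusted CIs: with R of m parameters selected, build each selected
    interval at level 1 - R*q/m instead of the usual 1 - q."""
    m, R = len(estimates), len(selected)
    alpha = R * q / m
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return {i: (estimates[i] - z * ses[i], estimates[i] + z * ses[i])
            for i in selected}

# Hypothetical highest-dose-vs-control effect estimates for 10 genes.
est = [2.1, 0.3, -1.8, 0.2, 0.1, 1.9, 0.0, 0.4, -0.2, 0.3]
se = [0.5] * 10
sel = [0, 2, 5]   # genes that passed the monotone-trend screen
cis = fcr_adjusted_cis(est, se, sel, q=0.05)
```

Because R < m, the adjusted per-interval level is stricter than 1 - q, so each selected interval is wider than an unadjusted one.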

  20. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining these parameters and understanding their effects allows for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
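The parameter-estimation step can be sketched for a 1-DOF spring-mass-damper slosh analogue (the study's models are more general 3-D mass-spring-damper systems; all values here are invented): simulate a free-decay response and recover the spring constant and damper coefficient by least squares over a grid.

```python
import numpy as np

def slosh_response(t, k, c, m=1.0, x0=0.02):
    """Free-decay displacement of a 1-DOF spring-mass-damper slosh analogue
    (underdamped closed form)."""
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return x0 * np.exp(-zeta * wn * t) * np.cos(wd * t)

def fit_slosh(t, x_meas, k_grid, c_grid):
    """Estimate (k, c) by brute-force least squares over a parameter grid."""
    best, best_sse = None, np.inf
    for k in k_grid:
        for c in c_grid:
            sse = float(np.sum((slosh_response(t, k, c) - x_meas) ** 2))
            if sse < best_sse:
                best, best_sse = (float(k), float(c)), sse
    return best

t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(2)
x_meas = slosh_response(t, k=4.0, c=0.3) + 1e-4 * rng.standard_normal(t.size)
k_hat, c_hat = fit_slosh(t, x_meas,
                         np.linspace(3.0, 5.0, 21),    # spring constant grid
                         np.linspace(0.1, 0.5, 21))    # damper coefficient grid
```

In practice gradient-based optimizers replace the grid, but the principle of matching simulated to measured response is the same.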

  1. Comparison of Inorganic Carbon System Parameters Measured in the Atlantic Ocean from 1990 to 1998 and Recommended Adjustments

    SciTech Connect

    Wanninkhof, R.

    2003-05-21

As part of the global synthesis effort sponsored by the Global Carbon Cycle project of the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Department of Energy, a comprehensive comparison was performed of inorganic carbon parameters measured on oceanographic surveys carried out under the auspices of the Joint Global Ocean Flux Study and related programs. Many of the cruises were performed as part of the World Hydrographic Program of the World Ocean Circulation Experiment and the NOAA Ocean-Atmosphere Carbon Exchange Study. Total dissolved inorganic carbon (DIC), total alkalinity (TAlk), fugacity of CO2, and pH data from twenty-three cruises were checked to determine whether there were systematic offsets of these parameters between cruises. The focus was on the DIC and TAlk state variables. Data quality and offsets of DIC and TAlk were determined by using several different techniques. One approach was based on crossover analyses, in which the deep-water concentrations of DIC and TAlk were compared for stations on different cruises that were within 100 km of each other. Regional comparisons were also made by using a multiple-parameter linear regression technique in which DIC or TAlk was regressed against hydrographic and nutrient parameters. When offsets of greater than 4 µmol/kg for DIC and/or 6 µmol/kg for TAlk were observed, the data taken on the cruise were closely scrutinized to determine whether the offsets were systematic. Based on these analyses, the DIC and TAlk data of three cruises were deemed of insufficient quality to be included in the comprehensive basinwide data set. For several of the cruises, small adjustments in TAlk were recommended for consistency with other cruises in the region. After these adjustments were incorporated, the inorganic carbon data from all cruises, along with hydrographic, chlorofluorocarbon, and nutrient data, were combined as a research-quality product for the scientific community.

  2. EMG/ECG Acquisition System with Online Adjustable Parameters Using ZigBee Wireless Technology

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroyuki

This paper deals with a novel wireless bio-signal acquisition system employing ZigBee wireless technology, which consists mainly of two components: an intelligent electrode and a data acquisition host. The former is the main topic of this paper. It is placed on a subject's body to amplify bio-signals such as EMG or ECG and stream the data at up to 2 ksps. One of the most remarkable features of the intelligent electrode is that it can change its own parameters, both digital and analog, on-line. The author first describes its design, then introduces a small, light, and low-cost implementation of the intelligent electrode named "VAMPIRE-BAT," and presents experimental results that confirm its usability and characterize its practical performance.

  3. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    PubMed

    Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack

    2016-05-27

Large-scale modeling of glacier mass balance often relies on output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates using coarse-resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a⁻¹ for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the use of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparably small and of minor importance.
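The RMSE-minimization variant of the calibration can be sketched with a toy degree-day mass-balance model (the model, stake set-up, and all numbers are invented; only the idea of jointly fitting a temperature shift dT and a precipitation scaling factor fP follows the abstract):

```python
import numpy as np

def mass_balance(temp_c, precip_m, ddf=0.004):
    """Toy climatic mass balance per stake (m w.e.): snowfall on freezing days
    minus a degree-day melt term. Not the study's actual model."""
    melt = ddf * np.clip(temp_c, 0.0, None).sum(axis=-1)
    accum = np.where(temp_c < 0.0, precip_m, 0.0).sum(axis=-1)
    return accum - melt

def calibrate(temp_rcm, precip_rcm, mb_obs, dt_grid, fp_grid):
    """RMSE-minimizing air-temperature shift dT and precipitation factor fP."""
    best, best_rmse = None, np.inf
    for dt in dt_grid:
        for fp in fp_grid:
            mb = mass_balance(temp_rcm + dt, precip_rcm * fp)
            rmse = np.sqrt(np.mean((mb - mb_obs) ** 2))
            if rmse < best_rmse:
                best, best_rmse = (float(dt), float(fp)), rmse
    return best

days = np.arange(365)
base = 8.0 * np.sin(2 * np.pi * days / 365) - 2.0    # sea-level daily temps, deg C
elev = np.array([100.0, 400.0, 700.0])               # hypothetical stake elevations
temp_rcm = base - 0.0065 * elev[:, None]             # lapse-rate extrapolation
precip_rcm = np.full((3, 365), 0.002)                # m/day
# Pretend the "true" forcing is the RCM field shifted +1.5 C and scaled by 1.2.
mb_obs = mass_balance(temp_rcm + 1.5, precip_rcm * 1.2)
dt_hat, fp_hat = calibrate(temp_rcm, precip_rcm, mb_obs,
                           np.arange(0.0, 3.01, 0.5), np.arange(0.8, 1.61, 0.1))
```

With reference data at several elevations the two adjustment parameters are jointly identifiable, which is the essence of the calibration against in situ stake measurements.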

  4. Sensitivity assessment, adjustment, and comparison of mathematical models describing the migration of pesticides in soil using lysimetric data

    NASA Astrophysics Data System (ADS)

    Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.

    2009-07-01

The water block of physically based models of different levels (the chromatographic PEARL model and the dual-porosity MACRO model) was parameterized using laboratory experimental data and tested using the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters of the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. The models unadjusted for the water block underestimated the lysimeter flow and overestimated the soil water content. The theoretical necessity of the model adjustment was explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). Adjusting the models by selecting the most sensitive hydrophysical parameters of the soils (the approximation parameters of the soil water retention curve, SWRC) gave good agreement between the predicted moisture profiles and their actual values. In contrast to the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirms the need for physically based models accounting for the separation of preferential flows in the pore space for the prediction, analysis, optimization, and management of modern agricultural technologies.

  5. Laser-plasma SXR/EUV sources: adjustment of radiation parameters for specific applications

    NASA Astrophysics Data System (ADS)

    Bartnik, A.; Fiedorowicz, H.; Fok, T.; Jarocki, R.; Kostecki, J.; Szczurek, A.; Szczurek, M.; Wachulak, P.; Wegrzyński, Ł.

    2014-12-01

In this work soft X-ray (SXR) and extreme ultraviolet (EUV) laser-produced plasma (LPP) sources employing Nd:YAG laser systems of different parameters are presented. The first is a 10-Hz EUV source based on a double-stream gas-puff target irradiated with a 3-ns/0.8-J laser pulse. The second employs a 10-ns/10-J/10-Hz laser system, and the third utilizes a laser system with the pulse shortened to approximately 1 ns. Using various gases in the gas-puff targets it is possible to obtain intense radiation in different wavelength ranges. In this way intense continuous radiation in a wide spectral range, as well as quasi-monochromatic radiation, was produced. To obtain high EUV or SXR fluence the radiation was focused using three types of grazing-incidence collectors and a multilayer Mo/Si collector. The first is a multifoil gold-plated collector consisting of two orthogonal stacks of ellipsoidal mirrors forming a double-focusing device. The second is an ellipsoidal collector forming part of an axisymmetrical ellipsoidal surface. The third is composed of two aligned axisymmetrical paraboloidal mirrors optimized for focusing of SXR radiation. The last collector is an off-axis ellipsoidal multilayer Mo/Si mirror allowing efficient focusing of the radiation in the spectral region centered at λ = 13.5 ± 0.5 nm. In this paper spectra of unaltered EUV or SXR radiation produced in different LPP source configurations, together with spectra and fluence values of focused radiation, are presented. Specific source configurations are assigned to various applications.

  6. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas which were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions. We use horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: for Fennoscandia the BIFROST dataset (Lidberg, 2010) and for North America the dataset from Sella (2007). We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. Firstly, the sensitivity to the harmonic degree was tested in order to reduce the computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while varying the maximum harmonic degree. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as larger values did not significantly modify the results while the computational time remained reasonable. Secondly, the GIA computations were performed to find the model that best fits the GNSS-based velocity field in the target areas. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on horizontal and vertical velocity rates from GIA modelling was studied. For every

  7. Effects of model deficiencies on parameter estimation

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.

    1988-01-01

    Reliable structural dynamic models will be required as a basis for deriving the reduced-order plant models used in control systems for large space structures. Ground vibration testing and model verification will play an important role in the development of these models; however, fundamental differences between the space environment and earth environment, as well as variations in structural properties due to as-built conditions, will make on-orbit identification essential. The efficiency, and perhaps even the success, of on-orbit identification will depend on having a valid model of the structure. It is envisioned that the identification process will primarily involve parametric methods. Given a correct model, a variety of estimation algorithms may be used to estimate parameter values. This paper explores the effects of modeling errors and model deficiencies on parameter estimation by reviewing previous case histories. The effects depend at least to some extent on the estimation algorithm being used. Bayesian estimation was used in the case histories presented here. It is therefore conceivable that the behavior of an estimation algorithm might be useful in detecting and possibly even diagnosing deficiencies. In practice, the task is complicated by the presence of systematic errors in experimental procedures and data processing and in the use of the estimation procedures themselves.

  8. Women's Work Conditions and Marital Adjustment in Two-Earner Couples: A Structural Model.

    ERIC Educational Resources Information Center

    Sears, Heather A.; Galambos, Nancy L.

    1992-01-01

    Evaluated structural model of women's work conditions, women's stress, and marital adjustment using path analysis. Findings from 86 2-earner couples with adolescents indicated support for spillover model in which women's work stress and global stress mediated link between their work conditions and their perceptions of marital adjustment.…

  9. A whale better adjusts the biosonar to ordered rather than to random changes in the echo parameters.

    PubMed

    Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee

    2012-09-01

A false killer whale's (Pseudorca crassidens) sonar clicks and auditory evoked potentials (AEPs) were recorded during echolocation with simulated echoes in two series of experiments. In the first, both the echo delay and the transfer factor (the dB ratio of the echo sound-pressure level to the emitted pulse source level) were varied randomly from trial to trial until enough data were collected (random presentation). In the second, a combination of echo delay and transfer factor was kept constant until enough data were collected (ordered presentation). The mean click level decreased with shortening delay and increasing transfer factor, more so with ordered than with random presentation. AEPs to the self-heard emitted clicks decreased with shortening delay and increasing echo level equally in both series. AEPs to echoes increased with increasing echo level, with little dependence on echo delay under random presentation but much stronger dependence under ordered presentation. Thus some adjustment of the whale's biosonar was possible without prior information about the echo parameters; however, the availability of prior information about echoes gave the whale additional capability to adjust both the transmitting and receiving parts of the biosonar.

  10. Reliability of parameter estimation in respirometric models.

    PubMed

    Checchi, Nicola; Marsili-Libelli, Stefano

    2005-09-01

When modelling a biochemical system, the fact that model parameters cannot be estimated exactly motivates the definition of tests for detecting unreliable estimates and for designing better experiments. The method applied in this paper is a further development of Marsili-Libelli et al. [2003. Confidence regions of estimated parameters for ecological systems. Ecol. Model. 165, 127-146] and is based on the confidence regions computed with the Fisher or the Hessian matrix. It detects the influence of the curvature, which represents the distortion of the model response due to its nonlinear structure. If the test is passed, the estimation can be considered reliable, in the sense that the optimisation search has reached a point on the error surface where the effect of nonlinearities is negligible. The test is used here for an assessment of respirometric model calibration, i.e. checking the experimental design and estimation reliability, with an application to real-life data in the ASM context. Only dissolved oxygen measurements have been considered, because this is a very popular experimental set-up in wastewater modelling. The estimation of a two-step nitrification model using batch respirometric data is considered, showing that the initial amount of ammonium-N and the number of data points play a crucial role in obtaining reliable estimates. From this basic application other results are derived, such as the estimation of the combined yield factor and of the second-step parameters, based on a modified kinetics and a specific nitrite experiment. Finally, guidelines for designing reliable experiments are provided.
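The Fisher-matrix machinery can be sketched for a toy one-exponential respirogram (model form, parameter values, and noise level are invented): build the sensitivity Jacobian, form FIM = JᵀJ/σ², and read linearized standard errors off the inverse.

```python
import numpy as np

def fisher_ses(t, th, sigma):
    """Linearized standard errors for the toy model OUR(t) = th0*exp(-th1*t):
    FIM = J^T J / sigma^2, with J the sensitivity Jacobian."""
    J = np.column_stack([np.exp(-th[1] * t),                # d model / d th0
                         -th[0] * t * np.exp(-th[1] * t)])  # d model / d th1
    fim = J.T @ J / sigma ** 2
    cov = np.linalg.inv(fim)   # asymptotic parameter covariance
    return np.sqrt(np.diag(cov))

t = np.linspace(0.0, 5.0, 60)          # sampling instants
theta = np.array([10.0, 0.8])          # hypothetical parameter values
ses = fisher_ses(t, theta, sigma=0.2)  # invented measurement noise sd
```

Comparing these linearized regions with exact (Hessian-based) ones is what reveals whether curvature makes the estimate unreliable.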

  11. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.
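A minimal sketch of a constant-parameter model of this kind (a CJS-style conditional likelihood with constant survival φ and capture probability p; the simulation and grid search below are invented for illustration and are not the cited programs):

```python
import math
import random

def simulate(n, T, phi, p, rng):
    """Release n animals at occasion 0; simulate survival and recapture."""
    hist = []
    for _ in range(n):
        h, alive = [1], True
        for _t in range(1, T):
            alive = alive and (rng.random() < phi)
            h.append(1 if alive and rng.random() < p else 0)
        hist.append(h)
    return hist

def ch_loglik(histories, phi, p):
    """Log-likelihood with constant phi and p, conditioned on first capture."""
    T = len(histories[0])
    chi = [0.0] * T            # chi[t]: P(never seen after occasion t | alive)
    chi[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        first = h.index(1)
        last = T - 1 - h[::-1].index(1)
        for t in range(first + 1, last + 1):
            ll += math.log(phi) + math.log(p if h[t] else 1 - p)
        ll += math.log(chi[last])
    return ll

rng = random.Random(6)
data = simulate(500, 5, phi=0.8, p=0.6, rng=rng)
grid = [i / 100 for i in range(5, 100, 5)]
phi_hat, p_hat = max(((f, q) for f in grid for q in grid),
                     key=lambda fp: ch_loglik(data, *fp))
```

Because only two parameters are estimated instead of one per occasion, the resulting estimators are more efficient whenever the constancy assumption holds, which is the point of the reduced-parameter models.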

  12. Detailed analysis of charge transport in amorphous organic thin layer by multiscale simulation without any adjustable parameters

    PubMed Central

    Uratani, Hiroki; Kubo, Shosei; Shizu, Katsuyuki; Suzuki, Furitsu; Fukushima, Tatsuya; Kaji, Hironori

    2016-01-01

    Hopping-type charge transport in an amorphous thin layer composed of organic molecules is simulated by the combined use of molecular dynamics, quantum chemical, and Monte Carlo calculations. By explicitly considering the molecular structure and the disordered intermolecular packing, we reasonably reproduce the experimental hole and electron mobilities and their applied electric field dependence (Poole–Frenkel behaviour) without using any adjustable parameters. We find that the distribution of the density-of-states originating from the amorphous nature has a significant impact on both the mobilities and Poole–Frenkel behaviour. Detailed analysis is also provided to reveal the molecular-level origin of the charge transport, including the origin of Poole–Frenkel behaviour. PMID:28000728
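The hopping-transport ingredient can be sketched with a 1-D kinetic Monte Carlo toy (Miller-Abrahams rates on a Gaussian-disordered chain; all values are invented and this is far simpler than the paper's multiscale scheme):

```python
import numpy as np

def ma_rate(dE, nu0=1e12, kT=0.025):
    """Miller-Abrahams rate: downhill hops at nu0, uphill Boltzmann-suppressed."""
    return nu0 * np.exp(-max(float(dE), 0.0) / kT)

def kmc_drift(site_e, field_ev_per_site, n_hops=30000, seed=4):
    """1-D kinetic Monte Carlo: nearest-neighbour hops on a disordered chain;
    returns the drift velocity in sites per second."""
    rng = np.random.default_rng(seed)
    n = site_e.size
    pos, x, t = 0, 0, 0.0
    for _ in range(n_hops):
        # Target-site energies tilted by the field (hopping right is downhill).
        eL = site_e[(pos - 1) % n] + field_ev_per_site
        eR = site_e[(pos + 1) % n] - field_ev_per_site
        kL = ma_rate(eL - site_e[pos])
        kR = ma_rate(eR - site_e[pos])
        ktot = kL + kR
        t += rng.exponential(1.0 / ktot)   # KMC waiting time
        if rng.random() < kR / ktot:
            pos = (pos + 1) % n
            x += 1
        else:
            pos = (pos - 1) % n
            x -= 1
    return x / t

rng = np.random.default_rng(5)
site_e = 0.025 * rng.standard_normal(200)   # Gaussian DOS, sigma = 25 meV
v_lo = kmc_drift(site_e, field_ev_per_site=0.01)
v_hi = kmc_drift(site_e, field_ev_per_site=0.04)
```

Even this toy shows the qualitative field dependence: a stronger field tilt yields a larger drift velocity, the raw ingredient behind Poole-Frenkel-type behaviour in the full 3-D disordered-morphology simulation.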

  13. Detailed analysis of charge transport in amorphous organic thin layer by multiscale simulation without any adjustable parameters

    NASA Astrophysics Data System (ADS)

    Uratani, Hiroki; Kubo, Shosei; Shizu, Katsuyuki; Suzuki, Furitsu; Fukushima, Tatsuya; Kaji, Hironori

    2016-12-01

    Hopping-type charge transport in an amorphous thin layer composed of organic molecules is simulated by the combined use of molecular dynamics, quantum chemical, and Monte Carlo calculations. By explicitly considering the molecular structure and the disordered intermolecular packing, we reasonably reproduce the experimental hole and electron mobilities and their applied electric field dependence (Poole–Frenkel behaviour) without using any adjustable parameters. We find that the distribution of the density-of-states originating from the amorphous nature has a significant impact on both the mobilities and Poole–Frenkel behaviour. Detailed analysis is also provided to reveal the molecular-level origin of the charge transport, including the origin of Poole–Frenkel behaviour.

  14. CHAMP: Changepoint Detection Using Approximate Model Parameters

    DTIC Science & Technology

    2014-06-01

detecting changes in the parameters and models that generate observed data. Commonly cited examples include detecting changes in stock market behavior [4] ... experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. Related work: Hidden Markov Models (HMMs) are ... largely the de facto tool of choice when analyzing time series data, but the standard HMM formulation has several undesirable properties. The number of

  15. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  16. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V. Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  17. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line-of-sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least-squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required as input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  18. Examining Competing Models of the Associations among Peer Victimization, Adjustment Problems, and School Connectedness

    ERIC Educational Resources Information Center

    Loukas, Alexandra; Ripperger-Suhler, Ken G.; Herrera, Denise E.

    2012-01-01

    The present study tested two competing models to assess whether psychosocial adjustment problems mediate the associations between peer victimization and school connectedness one year later, or if peer victimization mediates the associations between psychosocial adjustment problems and school connectedness. Participants were 500 10- to 14-year-old…

  19. Parental Support, Coping Strategies, and Psychological Adjustment: An Integrative Model with Late Adolescents.

    ERIC Educational Resources Information Center

    Holahan, Charles J.; And Others

    1995-01-01

    An integrative predictive model was applied to responses of 241 college freshmen to examine interrelationships among parental support, adaptive coping strategies, and psychological adjustment. Social support from both parents and a nonconflictual parental relationship were positively associated with adolescents' psychological adjustment. (SLD)

  20. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
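
    The direction of the bias can be illustrated with a deliberately simplified correction, not the authors' full multistate likelihood: if each true breeder is classified correctly only with probability delta (the calf is seen), the naive breeding proportion is biased low by exactly that factor. The value of delta below is an illustrative assumption.

```python
# Hedged illustration: a single classification probability and a one-line
# correction, not the multistate capture-recapture likelihood of the paper.
delta = 0.51                       # assumed P(true breeder classified as breeder)
psi_naive = 0.31                   # naive breeding probability, biased low
psi_adjusted = psi_naive / delta   # restores the misclassified breeders
```

    With delta near 0.5, the corrected value lands close to the reported adjusted estimate of 0.61, which is why ignoring misclassification roughly halves the apparent breeding probability.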

  1. Model parameters for simulation of physiological lipids

    PubMed Central

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  2. Moose models with vanishing S parameter

    SciTech Connect

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-09-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2){sub L} and U(1){sub Y} at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall-Sundrum metric.

  3. The relationship of values to adjustment in illness: a model for nursing practice.

    PubMed

    Harvey, R M

    1992-04-01

    This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values as well as the need for value change are described in the literature related to adjustment to physical disability and chronic illness. An empirical model, however, that explains the relationship of values to adjustment or adaptation has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.

  4. An impaired driver model for safe driving by control of vehicle parameters

    NASA Astrophysics Data System (ADS)

    Phuc Le, Thanh; Erdem Sahin, Davut; Stiharu, Ion

    2013-03-01

    This paper presents the results of the investigation on a driver model that can be adjusted to perform the role of an impaired driver (especially an alcohol-affected driver) who exhibits deterioration in driving skills in correlation with a specific level of impairment. The linear vehicle model providing lateral displacement and yaw is coupled with the driver model that is derived as a linear quadratic regulator with delay. The decrement in performance is modelled by decreasing the preview-time, visual-perception, and control-gain parameters and by increasing the reaction time. By comparing the standard deviation of the lateral position between the model and the real driver, the performance of the driver model impaired at blood alcohol concentrations of 0.05%, 0.08% and 0.11% results in deteriorations of 21%, 26% and 30%, respectively. The lateral error is reduced if the vehicle parameters are adjusted to adapt to the impaired driver model.

  5. Development of a winter wheat adjustable crop calendar model

    NASA Technical Reports Server (NTRS)

    Baker, J. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.

  6. A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.

  7. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
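
    The sensitivity-gated update described above can be sketched as follows. This is a hedged reading of the patent abstract, not its actual implementation: the dictionary-based "map", the finite-difference sensitivity, the threshold value, and the `perf` function are all illustrative assumptions.

```python
# Hedged sketch of a sensitivity-gated engine-map update; the table layout,
# finite-difference sensitivity, threshold, and gain are assumptions.
def update_maps(maps, perf, target, threshold=0.5, eps=1e-3, gain=0.1):
    """Adjust each map entry only when the performance variable is
    sufficiently sensitive to that control parameter."""
    base = perf(maps)
    new = dict(maps)
    for name in maps:
        bumped = dict(maps)
        bumped[name] += eps
        sens = (perf(bumped) - base) / eps   # finite-difference sensitivity
        if abs(sens) > threshold:            # gate: adjust only sensitive maps
            new[name] += gain * (target - base) / sens
    return new

# Illustrative (hypothetical) performance model and map values:
perf = lambda p: 2.0 * p["spark"] + 0.1 * p["egr"]   # sensitivities 2.0 vs 0.1
maps = {"spark": 1.0, "egr": 5.0}
adjusted = update_maps(maps, perf, target=4.0)
# only "spark" crosses the 0.5 sensitivity threshold and is adjusted
```

    Gating on sensitivity keeps the controller from chasing the target through a parameter whose effect on the performance variable is negligible, which matches the threshold logic described in the abstract.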

  8. A Disequilibrium Adjustment Mechanism for CPE Macroeconometric Models: Initial Testing on SOVMOD.

    DTIC Science & Technology

    1979-02-01

    The report describes work aimed at facilitating the integration of a disequilibrium adjustment mechanism into the SOVMOD macroeconometric model.

  9. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  10. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  11. Multiscale modeling of failure in composites under model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.

    2015-09-01

    This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.

  12. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2016-04-19

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not of the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
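
    The non-collapsibility at the heart of this argument can be shown with a toy calculation (my own illustration, with made-up numbers): even with treatment randomized within strata, so that there is no confounding at all, a conditional odds ratio of 2 in every stratum does not equal the marginal odds ratio.

```python
# Hedged numerical illustration of non-collapsibility of the odds ratio:
# no confounding, OR = 2 in both strata, yet the marginal OR is attenuated.
def odds(p):
    return p / (1.0 - p)

# Control-arm risks in two equally sized strata; treatment doubles the odds
risk_ctrl = [0.1, 0.5]
risk_trt = [2 * odds(p) / (1 + 2 * odds(p)) for p in risk_ctrl]

# Conditional odds ratios (one per stratum): both exactly 2
or_cond = [odds(t) / odds(c) for t, c in zip(risk_trt, risk_ctrl)]

# Marginal odds ratio: average risks over the strata, then take the OR
or_marg = odds(sum(risk_trt) / 2) / odds(sum(risk_ctrl) / 2)
# or_marg is about 1.72, strictly below the conditional value of 2
```

    The rate ratio, by contrast, is collapsible, which is consistent with the abstract's finding that the conditional rate ratio escapes this source of bias.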

  13. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…

  14. Comparison of the Properties of Regression and Categorical Risk-Adjustment Models

    PubMed Central

    Averill, Richard F.; Muldoon, John H.; Hughes, John S.

    2016-01-01

    Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302

  15. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  16. The development of a risk-adjusted capitation payment system: the Maryland Medicaid model.

    PubMed

    Weiner, J P; Tucker, A M; Collins, A M; Fakhraei, H; Lieberman, R; Abrams, C; Trapnell, G R; Folkemer, J G

    1998-10-01

    This article describes the risk-adjusted payment methodology employed by the Maryland Medicaid program to pay managed care organizations. It also presents an empirical simulation analysis using claims data from 230,000 Maryland Medicaid recipients. This simulation suggests that the new payment model will help adjust for adverse or favorable selection. The article is intended for a wide audience, including state and national policy makers concerned with the design of managed care Medicaid programs and actuaries, analysts, and researchers involved in the design and implementation of risk-adjusted capitation payment systems.

  17. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    PubMed Central

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-01-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified. PMID:27706076
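
    The discrete LQR step mentioned above rests on solving a discrete algebraic Riccati equation for the feedback gain. As a hedged sketch (a scalar toy system, not the panel's actual MIMO finite-element model; all numbers are illustrative), the iteration looks like this:

```python
# Hedged sketch: scalar discrete-time LQR via fixed-point Riccati iteration,
# not the deployable-antenna model itself. Dynamics: x[k+1] = a*x[k] + b*u[k].
def dlqr_scalar(a, b, q, r, iters=500):
    """Iterate the scalar discrete algebraic Riccati equation and return
    the steady-state cost p and feedback gain k (control law u = -k * x)."""
    p = q
    for _ in range(iters):
        p = q + (a * a * p * r) / (r + b * b * p)   # scalar DARE update
    k = (a * b * p) / (r + b * b * p)               # optimal feedback gain
    return p, k

# Illustrative unstable plant (a > 1) stabilized by the LQR gain
p, k = dlqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)
closed_loop = 1.2 - 1.0 * k   # stable iff |a - b*k| < 1
```

    In the MIMO setting of the paper, `p` and `k` become matrices and the same fixed-point structure is solved over the modal coordinates extracted from the finite element model.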

  19. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    USGS Publications Warehouse

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.

  20. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  2. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  3. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment has been considered in various mathematical spaces: in the Cartesian geocentric system, on a reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often used. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, linearized forms of the observational equations with explicitly specified coefficients are used). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS
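
    The preferred functional model amounts to expressing the Cartesian GNSS vector directly as a function of the endpoints' geodetic coordinates (B, L, h). A minimal sketch of that forward model, using the standard GRS80 ellipsoid constants (the station coordinates below are hypothetical):

```python
# Hedged sketch: geodetic-to-Cartesian conversion (GRS80 constants) used to
# write a GNSS vector observation directly in terms of geodetic coordinates.
import math

A = 6378137.0              # GRS80 semi-major axis (m)
E2 = 0.00669438002290      # GRS80 first eccentricity squared

def geodetic_to_xyz(lat_deg, lon_deg, h):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def gnss_vector(p1, p2):
    """Model value of the Cartesian GNSS vector from station 1 to station 2,
    written directly as a function of their geodetic coordinates (B, L, h)."""
    x1, y1, z1 = geodetic_to_xyz(*p1)
    x2, y2, z2 = geodetic_to_xyz(*p2)
    return x2 - x1, y2 - y1, z2 - z1

equator = geodetic_to_xyz(0.0, 0.0, 0.0)   # lands on the semi-major axis
```

    Linearizing `gnss_vector` with respect to the six geodetic coordinates yields the observational equations the paper advocates, without ever projecting the vector onto the ellipsoid.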

  4. Use of Factorial Analysis to Determine the Interaction Between Parameters of a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Varejão, C. G.; Varejão, E. V.; Costa, M. H.

    2007-05-01

    Land surface models use several parameters to represent biophysical processes. These parameters are frequently unknown and reproduce the characteristics of the ecosystem under study with uncertainty. Model calibration techniques find values for each parameter that reduce this uncertainty. However, the calibration process is computationally expensive, since many model runs are needed to adjust the parameters. The more parameters are considered, the more difficult the process becomes, particularly when there are interactions among them and a modification of one parameter's value changes the optimum values of the others. The use of a factorial experiment allows the identification of possible inert parameters, whose values do not influence the final result of the experiment and which could therefore be excluded from the calibration process. In this work we used factorial analysis to verify the existence of interactions among 5 parameters of the IBIS land surface model - Beta2 (distribution of fine roots), Vmax (maximum Rubisco enzyme capacity), m (coefficient related to the stomatal conductance), CHS (heat capacity of stems), and CHU (heat capacity of leaves) - evaluated against the output fluxes Rn (net radiation), H (sensible heat flux), LE (latent heat flux), and NEE (net ecosystem CO2 exchange). Data were collected at the Amazon tropical rainforest site known as K83, near Santarem, Brazil. Knowledge of the existing interactions between the parameters can considerably reduce the computational cost of further optimization processes, since each parameter that does not interact with others can be optimized independently.
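
    The factorial screen described above can be sketched with the smallest case, a two-level 2^2 design. This is an illustration only: the response values come from a made-up additive function, not from IBIS runs.

```python
# Hedged sketch of a two-level (2^2) factorial screen for interaction;
# the response values are from an illustrative additive function, not IBIS.
def factorial_effects(y00, y10, y01, y11):
    """Main effects and interaction from a 2x2 factorial.
    yAB: response with factor a at level A and factor b at level B (0=low, 1=high)."""
    main_a = ((y10 + y11) - (y00 + y01)) / 2.0
    main_b = ((y01 + y11) - (y00 + y10)) / 2.0
    interaction = ((y00 + y11) - (y10 + y01)) / 2.0
    return main_a, main_b, interaction

# Additive response y = 2a + 3b with coded levels a, b in {-1, +1}:
ea, eb, eab = factorial_effects(-5.0, -1.0, 1.0, 5.0)
# interaction is exactly zero, so each parameter could be calibrated alone
```

    A non-zero interaction contrast would flag parameter pairs that must be optimized jointly, which is precisely the information used to cut the cost of the subsequent calibration.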

  5. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    PubMed

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
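
    On the ratio scale, indirect adjustment with a negative-control outcome can be reduced to a one-line correction. This is a hedged illustration of the idea only, with made-up numbers, and it leans on the strong assumption stated in the abstract that unmeasured smoking confounds the primary and negative-control associations equally:

```python
# Hedged illustration: ratio-scale indirect adjustment via a negative-control
# outcome; numbers are illustrative, not estimates from either cohort.
hr_observed = 1.30           # exposure vs. primary outcome (confounded by smoking)
hr_negative_control = 1.10   # exposure vs. an outcome it cannot cause (truth = 1)
hr_indirect_adjusted = hr_observed / hr_negative_control
```

    Any departure of the negative-control hazard ratio from 1 is read as the footprint of the shared unmeasured confounder and divided out of the primary association.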

  6. A model of parental representations, second individuation, and psychological adjustment in late adolescence.

    PubMed

    Boles, S A

    1999-04-01

    This study examined the role that mental representations and the second individuation process play in adjustment during late adolescence. Participants between the ages of 18 and 22 were used to test a theoretical model exploring the various relationships among the following latent variables: Parental Representations, Psychological Differentiation, Psychological Dependence, Positive Adjustment, and Maladjustment. The results indicated that the quality of parental representations facilitates the second individuation process, which in turn facilitates psychological adjustment in late adolescence. Furthermore, the results indicated that the second individuation process mediates the influence that the quality of parental representations have on psychological adjustment in late adolescence. These findings are discussed in light of previous research in this area, and clinical implications and suggestions for future research are offered.

  7. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    ERIC Educational Resources Information Center

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
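The quality-adjusted life year bookkeeping behind such a model can be sketched in a few lines. The quality weights and durations below are invented for illustration, not estimates from the study:

```python
# Toy QALY arithmetic for the smoker vs. never-smoker comparison the
# model formalizes; all weights and years are invented.

def qalys(segments):
    """Sum of (years lived * quality weight in [0, 1]) over life segments."""
    return sum(years * weight for years, weight in segments)

never_smoker = qalys([(60.0, 0.90), (15.0, 0.75)])
smoker = qalys([(55.0, 0.85), (8.0, 0.60)])
qaly_loss = never_smoker - smoker
```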

  8. A Model of Divorce Adjustment for Use in Family Service Agencies.

    ERIC Educational Resources Information Center

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  9. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    USGS Publications Warehouse

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
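The roughly 4°C shift in the temperature dependence of Cmax can be illustrated with a stand-in curve. The Gaussian dome below is not the Wisconsin model's actual temperature function, and the optimum and width are invented, not the re-estimated parameters:

```python
import math

# Illustration of shifting a dome-shaped temperature-dependence curve
# for maximum consumption (Cmax) about 4 deg C warmer. A Gaussian
# stand-in, not the Wisconsin bioenergetics equations.

def f_temp(t, t_opt=19.0, width=6.0):
    """Dome-shaped temperature multiplier in (0, 1]."""
    return math.exp(-((t - t_opt) / width) ** 2)

def f_temp_shifted(t, shift=4.0):
    """The same curve moved 'shift' degrees toward warmer temperatures."""
    return f_temp(t - shift)
```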

  10. Improvement of hydrological model calibration by selecting multiple parameter ranges

    NASA Astrophysics Data System (ADS)

    Wu, Qiaofeng; Liu, Shuguang; Cai, Yi; Li, Xinjian; Jiang, Yangming

    2017-01-01

The parameters of hydrological models are usually calibrated to achieve good performance, owing to the highly non-linear nature of hydrological process modelling. However, parameter calibration efficiency depends directly on the parameter ranges, and range selection is in turn affected by the probability distribution of parameter values, parameter sensitivity, and parameter correlation. A newly proposed method is employed to determine the optimal combination of multi-parameter ranges for improving the calibration of hydrological models. First, the probability distribution was specified for each parameter of the model based on genetic algorithm (GA) calibration. Then, several ranges were selected for each parameter according to the corresponding probability distribution, and the optimal range was determined by comparing the model results calibrated with the different selected ranges. Next, parameter correlation and sensitivity were evaluated by quantifying two indexes, RC(Y,X) and SE, which can be used to handle negatively correlated parameters when specifying the optimal combination of ranges of all parameters for model calibration. The investigation shows that the probability distribution of calibrated values of any particular parameter in a Xinanjiang model approaches a normal or exponential distribution. The multi-parameter optimal range selection method is superior to the single-parameter one for calibrating hydrological models with multiple parameters. The combination of optimal ranges of all parameters is not necessarily the optimum, inasmuch as some parameters have negative effects on other parameters. The application of the proposed methodology gives rise to an increase of 0.01 in minimum Nash-Sutcliffe efficiency (E_NS) compared with that of the pure GA method. The rise in minimum E_NS with little change in the maximum may shrink the range of possible solutions, which can effectively reduce the uncertainty of the model performance.
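The Nash-Sutcliffe efficiency used as the calibration criterion above is straightforward to compute; a minimal sketch on toy series (this is not the Xinanjiang model itself):

```python
# Nash-Sutcliffe efficiency:
# E_NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)

def nash_sutcliffe(obs, sim):
    """1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_mean = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_mean
```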

  11. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
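The RMSD criterion the study tracks against sample size is a one-liner; a sketch with illustrative values:

```python
import math

# Root mean squared deviation of estimated item parameters from their
# generating ("true") values, the precision metric cited above.

def rmsd(estimates, true_values):
    n = len(true_values)
    return math.sqrt(sum((e - t) ** 2
                         for e, t in zip(estimates, true_values)) / n)
```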

  12. Testing a developmental cascade model of adolescent substance use trajectories and young adult adjustment

    PubMed Central

Lynne-Landsman, Sarah D.; Bradshaw, Catherine P.; Ialongo, Nicholas S.

    2013-01-01

    Developmental models highlight the impact of early risk factors on both the onset and growth of substance use, yet few studies have systematically examined the indirect effects of risk factors across several domains, and at multiple developmental time points, on trajectories of substance use and adult adjustment outcomes (e.g., educational attainment, mental health problems, criminal behavior). The current study used data from a community epidemiologically defined sample of 678 urban, primarily African American youth, followed from first grade through young adulthood (age 21) to test a developmental cascade model of substance use and young adult adjustment outcomes. Drawing upon transactional developmental theories and using growth mixture modeling procedures, we found evidence for a developmental progression from behavioral risk to adjustment problems in the peer context, culminating in a high-risk trajectory of alcohol, cigarette, and marijuana use during adolescence. Substance use trajectory membership was associated with adjustment in adulthood. These findings highlight the developmental significance of early individual and interpersonal risk factors on subsequent risk for substance use and, in turn, young adult adjustment outcomes. PMID:20883591

  13. Contact angle adjustment in equation-of-state-based pseudopotential model

    NASA Astrophysics Data System (ADS)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  14. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  15. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  16. A local adjustment strategy for the initialization of dynamic causal modelling to infer effective connectivity in brain epileptic structures.

    PubMed

    Xiang, Wentao; Karfoul, Ahmad; Shu, Huazhong; Le Bouquin Jeannès, Régine

    2017-03-07

This paper addresses the question of effective connectivity in the human cerebral cortex in the context of epilepsy. Among model-based approaches to inferring brain connectivity, spectral Dynamic Causal Modelling is a conventional technique, for which we propose an alternative way to estimate the cross-spectral density. The proposed strategy tackles the underestimation of the free energy by the well-known variational Expectation-Maximization algorithm, which is highly sensitive to the initialization of the parameter vector, through a permanent local adjustment of the initialization process. The performance of the proposed strategy in terms of effective connectivity identification is assessed using simulated data generated by a neuronal mass model (simulating unidirectional and bidirectional flows) and real epileptic intracerebral Electroencephalographic signals. Results show the efficiency of the proposed approach compared with conventional Dynamic Causal Modelling and a variant in which a deterministic annealing scheme is employed.

  17. Testing a Social Ecological Model for Relations between Political Violence and Child Adjustment in Northern Ireland

    PubMed Central

    Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2013-01-01

Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland completed measures of community discord, family relations, and children’s regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members’ reports of current sectarian and non-sectarian antisocial behavior. Interparental conflict and parental monitoring and children’s emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children’s adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world. PMID:20423550

  18. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    PubMed

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  19. Bayesian approach to decompression sickness model parameter estimation.

    PubMed

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
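The contrast between a maximum likelihood point estimate and a Bayesian credible interval can be shown with a toy one-parameter problem. A grid posterior under a flat prior for a binomial probability; all numbers are illustrative and unrelated to any actual decompression sickness model:

```python
import math

# Toy contrast: ML point estimate vs. Bayesian credible interval for a
# binomial probability p (grid posterior, flat prior). Illustrative only.

def binom_loglik(p, k, n):
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def grid_posterior(k, n, m=999):
    """Posterior over p on a uniform grid in (0, 1), flat prior."""
    grid = [(i + 1) / (m + 1) for i in range(m)]
    w = [math.exp(binom_loglik(p, k, n)) for p in grid]
    z = sum(w)
    return grid, [wi / z for wi in w]

def credible_interval(k, n, mass=0.95):
    """Central credible interval from the grid posterior."""
    grid, post = grid_posterior(k, n)
    tail = (1.0 - mass) / 2.0
    cum, lo, hi = 0.0, None, None
    for p, q in zip(grid, post):
        cum += q
        if lo is None and cum >= tail:
            lo = p
        if hi is None and cum >= 1.0 - tail:
            hi = p
    return lo, hi

mle = 2 / 10                       # ML point estimate for k=2, n=10
lo, hi = credible_interval(2, 10)  # Bayesian interval, same data
```

The interval directly answers "with what probability does p lie in this range," which is the kind of statement the abstract argues the Bayesian approach supports more naturally.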

  20. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  1. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  2. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    ERIC Educational Resources Information Center

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  3. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  4. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    ERIC Educational Resources Information Center

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  5. Distributed parameter modeling of repeated truss structures

    NASA Technical Reports Server (NTRS)

    Wang, Han-Ching

    1994-01-01

A new approach to find homogeneous models for beam-like repeated flexible structures is proposed which conceptually involves two steps. The first step involves the approximation of the 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum. The displacement field is approximated by Taylor series expansion. Then, the cross-sectional mass and stiffness matrices are obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step involves the homogenization from a 1-D periodic beam model to a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is using compliance instead of stiffness in homogenization. An obvious justification is that the stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
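The compliance-versus-stiffness point can be made with the simplest series system; a toy sketch with arbitrary numbers:

```python
# Why compliance, not stiffness, is additive along the generic axis:
# segments in series behave like springs in series. Numbers arbitrary.

def series_stiffness(stiffnesses):
    """Effective stiffness when compliances (1/k) add in series."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)

k_eff = series_stiffness([2.0, 2.0])  # 1.0, not the stiffness average 2.0
```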

  6. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    DTIC Science & Technology

    1986-06-01

the item parameters. Let us briefly review the elements of latent class models. The reader desiring a thorough introduction can consult Lazarsfeld and...parameters, including most of the models which have been proposed to date. The latent distance model of Lazarsfeld and Henry (1968) and the quasi...Psychometrika, 1964, 29, 115-129. Lazarsfeld, P.F., and Henry, N.W. Latent structure analysis. Boston: Houghton-Mifflin, 1968.

  7. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables

    PubMed Central

    Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.

    2016-01-01

The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h−1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.01) were found between 10 km running time and adjusted and unadjusted RE and PTV, providing models with effect size > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model composed of a single PTV accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation. PMID:28149382
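Allometric adjustment followed by a least-squares fit can be sketched directly. The exponent 0.72 follows the abstract, but the runner data below are fabricated for illustration:

```python
# Allometric scaling then ordinary least squares, as in the
# single-predictor PTV model. Runner data fabricated.

def allometric(values, exponent):
    return [v ** exponent for v in values]

def ols_fit(x, y):
    """Least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

ptv = [16.0, 17.5, 18.0, 19.2, 20.1]       # peak treadmill velocity, km/h
time_10k = [44.0, 41.5, 40.8, 38.9, 37.2]  # race time, min

a, b = ols_fit(allometric(ptv, 0.72), time_10k)
```

The fitted slope is negative, as expected: a higher peak treadmill velocity predicts a shorter race time.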

  8. Parameter redundancy in discrete state‐space and integrated models

    PubMed Central

    McCrea, Rachel S.

    2016-01-01

Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy; such a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826
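The redundancy check behind exhaustive summaries can be illustrated numerically: if the Jacobian of the summary with respect to the parameters loses rank, only combinations of parameters are estimable. A toy two-parameter sketch (not the symbolic machinery of the paper):

```python
# Numeric redundancy check on toy two-parameter models: rank-deficient
# Jacobian of the exhaustive summary => parameter redundant.

def jacobian(f, theta, h=1e-6):
    """Forward-difference Jacobian; row i holds d f_k / d theta_i."""
    base = f(theta)
    rows = []
    for i in range(len(theta)):
        bumped = list(theta)
        bumped[i] += h
        rows.append([(a - b) / h for a, b in zip(f(bumped), base)])
    return rows

def rank_2x2(J, tol=1e-3):
    """Rank of a 2x2 matrix via its determinant."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return 2 if abs(det) > tol else 1

def redundant(th):      # parameters enter only as the product a*b
    p = th[0] * th[1]
    return [p, p ** 2]

def identifiable(th):   # both a and a*b appear: full rank
    return [th[0], th[0] * th[1]]
```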

  9. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
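One of the four SA approaches above, standardized regression coefficients, is easy to sketch. The two "parameters" and the linear response below are synthetic stand-ins for CLM parameters and runoff, not the actual model:

```python
import random
import statistics

# Standardized regression coefficients (SRC) from sampled input-output
# pairs. Synthetic inputs and response; illustrative only.

def src(samples_x, samples_y):
    """One-predictor-at-a-time SRC; adequate here because the sampled
    inputs are (near-)independent."""
    sy = statistics.pstdev(samples_y)
    my = statistics.fmean(samples_y)
    out = []
    for j in range(len(samples_x[0])):
        xj = [row[j] for row in samples_x]
        mx = statistics.fmean(xj)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xj, samples_y))
                 / sum((x - mx) ** 2 for x in xj))
        out.append(slope * statistics.pstdev(xj) / sy)
    return out

random.seed(0)
X = [[random.random(), random.random()] for _ in range(2000)]
Y = [3.0 * x1 + 0.5 * x2 for x1, x2 in X]
coeffs = src(X, Y)  # first parameter dominates the response
```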

  10. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    SciTech Connect

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  11. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  12. Equating Parameter Estimates from the Generalized Graded Unfolding Model.

    ERIC Educational Resources Information Center

    Roberts, James S.

Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…

  13. Improving the global applicability of the RUSLE model - adjustment of the topographical and rainfall erosivity factors

    NASA Astrophysics Data System (ADS)

    Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.

    2015-09-01

Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale. This limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale on which the model is parameterized. Our study aims at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale we find that the adjusted version shows a higher global mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the
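The topographical factor the study adjusts combines slope length (L) and slope steepness (S). A sketch of the commonly cited forms of these two RUSLE sub-factors; the paper's fractal slope scaling itself is not reproduced here:

```python
import math

# Simplified RUSLE slope-length (L) and slope-steepness (S) factors,
# in the forms commonly cited for RUSLE; illustrative only.

def s_factor(slope_rad):
    """Slope-steepness factor: one expression below 9 % slope,
    another above."""
    if math.tan(slope_rad) < 0.09:
        return 10.8 * math.sin(slope_rad) + 0.03
    return 16.8 * math.sin(slope_rad) - 0.50

def l_factor(slope_len_m, m=0.5):
    """Slope-length factor relative to the 22.13 m reference plot."""
    return (slope_len_m / 22.13) ** m
```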

  14. Validating Mechanistic Sorption Model Parameters and Processes for Reactive Transport in Alluvium

    SciTech Connect

    Zavarin, M; Roberts, S K; Rose, T P; Phinney, D L

    2002-05-02

The laboratory batch and flow-through experiments presented in this report provide a basis for validating the mechanistic surface complexation and ion exchange model we use in our hydrologic source term (HST) simulations. Batch sorption experiments were used to examine the effect of solution composition on sorption. Flow-through experiments provided for an analysis of the transport behavior of sorbing elements and tracers which includes dispersion and fluid accessibility effects. Analysis of downstream flow-through column fluids allowed for evaluation of weakly-sorbing element transport. Secondary Ion Mass Spectrometry (SIMS) analysis of the core after completion of the flow-through experiments permitted the evaluation of transport of strongly sorbing elements. A comparison between these data and model predictions provides additional constraints to our model and improves our confidence in near-field HST model parameters. In general, cesium, strontium, samarium, europium, neptunium, and uranium behavior could be accurately predicted using our mechanistic approach but only after some adjustment was made to the model parameters. The required adjustments included a reduction in strontium affinity for smectite, an increase in cesium affinity for smectite and illite, a reduction in iron oxide and calcite reactive surface area, and a change in clinoptilolite reaction constants to reflect a more recently published set of data. In general, these adjustments are justifiable because they fall within a range consistent with our understanding of the parameter uncertainties. These modeling results suggest that the uncertainty in the sorption model parameters must be accounted for to validate the mechanistic approach. The uncertainties in predicting the sorptive behavior of U-1a and UE-5n alluvium also suggest that these uncertainties must be propagated to near-field HST and large-scale corrective action unit (CAU) models.

  15. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance, which would otherwise cause filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the…
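    The augmented-state idea above can be illustrated with a minimal pure-Python sketch: an AR(1) state and its unknown coefficient are concatenated into a joint vector, the parameter sub-ensemble is shrunk with West-Liu-style kernel smoothing, and a perturbed-observation ensemble Kalman filter updates both. The AR(1) model, noise levels, and shrinkage factor are illustrative assumptions, not the paper's setup.

```python
import random
import statistics

random.seed(7)

# Synthetic truth: AR(1) state x_t = a*x_{t-1} + w_t, observed with noise.
A_TRUE, Q_SD, R_SD = 0.8, 0.3, 0.1
x, obs = 0.0, []
for _ in range(200):
    x = A_TRUE * x + random.gauss(0.0, Q_SD)
    obs.append(x + random.gauss(0.0, R_SD))

# Augmented ensemble: each member carries (state x, parameter a).
N, H_SHRINK = 300, 0.97
ens = [[random.gauss(0.0, 1.0), random.uniform(0.2, 1.2)] for _ in range(N)]

for y in obs:
    # Kernel smoothing of the parameter sub-ensemble: members are pulled
    # toward the ensemble mean and re-jittered, so the parameter spread
    # neither collapses (filter divergence) nor jumps abruptly.
    a_bar = statistics.fmean(m[1] for m in ens)
    a_var = statistics.variance([m[1] for m in ens])
    jitter = ((1.0 - H_SHRINK ** 2) * a_var) ** 0.5
    for m in ens:
        m[1] = H_SHRINK * m[1] + (1.0 - H_SHRINK) * a_bar + random.gauss(0.0, jitter)
    # Forecast: propagate every member through the model.
    for m in ens:
        m[0] = m[1] * m[0] + random.gauss(0.0, Q_SD)
    # Analysis: perturbed-observation Kalman update of the joint vector.
    x_bar = statistics.fmean(m[0] for m in ens)
    a_bar = statistics.fmean(m[1] for m in ens)
    cxx = sum((m[0] - x_bar) ** 2 for m in ens) / (N - 1)
    cax = sum((m[1] - a_bar) * (m[0] - x_bar) for m in ens) / (N - 1)
    kx = cxx / (cxx + R_SD ** 2)
    ka = cax / (cxx + R_SD ** 2)
    for m in ens:
        d = y + random.gauss(0.0, R_SD) - m[0]
        m[0] += kx * d
        m[1] += ka * d

a_est = statistics.fmean(m[1] for m in ens)
```

    With these settings the posterior mean of the parameter should approach the true value of 0.8, while the kernel-smoothing step keeps the parameter ensemble from collapsing.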

  16. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
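    As a toy illustration of the bound-narrowing idea (not the authors' LoBaRE algorithm itself), the sketch below recursively shrinks uniform sampling bounds around the best-performing samples of a two-parameter exponential-decay model; the model, noise level, and retention fraction are assumptions:

```python
import math
import random

random.seed(3)

# Synthetic calibration data from a toy model y = p1 * exp(-p2 * t).
P1_TRUE, P2_TRUE = 2.0, 0.5
ts = [0.2 * k for k in range(40)]
ys = [P1_TRUE * math.exp(-P2_TRUE * t) + random.gauss(0.0, 0.02) for t in ts]

def sse(p1, p2):
    # Sum of squared errors of a candidate parameter set.
    return sum((y - p1 * math.exp(-p2 * t)) ** 2 for t, y in zip(ts, ys))

# Recursively narrow the sampling bounds around the best-performing region
# instead of collapsing onto a single best parameter set.
bounds = [(0.1, 10.0), (0.01, 5.0)]
for _ in range(8):
    samples = [(random.uniform(*bounds[0]), random.uniform(*bounds[1]))
               for _ in range(300)]
    ranked = sorted(samples, key=lambda p: sse(*p))[:30]   # keep the best 10 %
    bounds = [(min(p[i] for p in ranked), max(p[i] for p in ranked))
              for i in (0, 1)]

p1_est, p2_est = ranked[0]
```

    Keeping a top fraction rather than a single best sample retains a parameter *region* at every iteration, which is the feature the abstract emphasizes.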

  17. Isolating parameter sensitivity in reach scale transient storage modeling

    NASA Astrophysics Data System (ADS)

    Schmadel, Noah M.; Neilson, Bethany T.; Heavilin, Justin E.; Wörman, Anders

    2016-03-01

    Parameter sensitivity analyses, although necessary to assess identifiability, may not lead to an increased understanding or accurate representation of transient storage processes when associated parameter sensitivities are muted. Reducing the number of uncertain calibration parameters through field-based measurements may allow for more realistic representations and improved predictive capabilities of reach scale stream solute transport. Using a two-zone transient storage model, we examined the spatial detail necessary to set parameters describing hydraulic characteristics and isolate the sensitivity of the parameters associated with transient storage processes. We represented uncertain parameter distributions as triangular fuzzy numbers and used closed form statistical moment solutions to express parameter sensitivity thus avoiding copious model simulations. These solutions also allowed for the direct incorporation of different levels of spatial information regarding hydraulic characteristics. To establish a baseline for comparison, we performed a sensitivity analysis considering all model parameters as uncertain. Next, we set hydraulic parameters as the reach averages, leaving the transient storage parameters as uncertain, and repeated the analysis. Lastly, we incorporated high resolution hydraulic information assessed from aerial imagery to examine whether more spatial detail was necessary to isolate the sensitivity of transient storage parameters. We found that a reach-average hydraulic representation, as opposed to using detailed spatial information, was sufficient to highlight transient storage parameter sensitivity and provide more information regarding the potential identifiability of these parameters.
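    The closed-form moments that make such simulation-free sensitivity analysis possible are simple under one common probabilistic reading of a triangular fuzzy number. The sketch below gives the mean and variance of a triangular distribution and a first-order variance propagation; the toy output, partial derivatives, and parameter ranges are invented for illustration:

```python
def tri_mean(a, m, b):
    # Mean of a triangular distribution with support [a, b] and mode m.
    return (a + m + b) / 3.0

def tri_var(a, m, b):
    # Variance of the same triangular distribution (closed form).
    return (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0

def first_order_var(partials, tri_params):
    # First-order (moment-based) propagation: each uncertain parameter
    # contributes (df/dp)^2 * Var(p); independence is assumed.
    return sum(g * g * tri_var(*p) for g, p in zip(partials, tri_params))

# Toy transient-storage-like sensitivities: cross-sectional area A and a
# storage-exchange rate alpha (illustrative numbers only).
contrib_A = 2.0 ** 2 * tri_var(0.8, 1.0, 1.2)
contrib_alpha = 5.0 ** 2 * tri_var(0.01, 0.03, 0.05)
total = first_order_var([2.0, 5.0], [(0.8, 1.0, 1.2), (0.01, 0.03, 0.05)])
```

    Comparing the individual contributions against `total` identifies which uncertain parameter dominates the output variance without any model runs.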

  18. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
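    A minimal Metropolis-within-Gibbs sampler for the two-parameter logistic (2PL) model can be sketched in pure Python; the priors, proposal widths, chain length, and synthetic item parameters below are assumptions for illustration, not the study's settings:

```python
import math
import random

random.seed(11)

def irf(th, aj, bj):
    # 2PL item response function, clamped away from 0 and 1 for log-safety.
    p = 1.0 / (1.0 + math.exp(-aj * (th - bj)))
    return min(max(p, 1e-12), 1.0 - 1e-12)

# Synthetic data: 80 persons, 5 items with known parameters.
A_TRUE = [1.0, 1.2, 0.8, 1.5, 1.0]
B_TRUE = [-1.5, -0.5, 0.0, 0.5, 1.5]
N_P, N_I = 80, 5
theta_true = [random.gauss(0, 1) for _ in range(N_P)]
X = [[1 if random.random() < irf(theta_true[i], A_TRUE[j], B_TRUE[j]) else 0
      for j in range(N_I)] for i in range(N_P)]

def ll_person(i, th, a, b):
    return sum(math.log(irf(th, a[j], b[j]) if X[i][j] else 1 - irf(th, a[j], b[j]))
               for j in range(N_I))

def ll_item(j, aj, bj, theta):
    return sum(math.log(irf(theta[i], aj, bj) if X[i][j] else 1 - irf(theta[i], aj, bj))
               for i in range(N_P))

theta, a, b = [0.0] * N_P, [1.0] * N_I, [0.0] * N_I
b_draws = [[] for _ in range(N_I)]

for it in range(800):
    # Ability block: random-walk Metropolis with a N(0,1) prior on theta_i.
    for i in range(N_P):
        prop = theta[i] + random.gauss(0, 0.5)
        lr = (ll_person(i, prop, a, b) - ll_person(i, theta[i], a, b)
              - 0.5 * prop ** 2 + 0.5 * theta[i] ** 2)
        if math.log(random.random()) < lr:
            theta[i] = prop
    # Item block: random walk on log(a_j) (N(0, 0.5^2) prior on log a_j)
    # and on b_j (N(0,1) prior).
    for j in range(N_I):
        la = math.log(a[j])
        lap = la + random.gauss(0, 0.1)
        lr = (ll_item(j, math.exp(lap), b[j], theta) - ll_item(j, a[j], b[j], theta)
              - 2.0 * lap ** 2 + 2.0 * la ** 2)
        if math.log(random.random()) < lr:
            a[j] = math.exp(lap)
        bp = b[j] + random.gauss(0, 0.2)
        lr = (ll_item(j, a[j], bp, theta) - ll_item(j, a[j], b[j], theta)
              - 0.5 * bp ** 2 + 0.5 * b[j] ** 2)
        if math.log(random.random()) < lr:
            b[j] = bp
        if it >= 300:
            b_draws[j].append(b[j])

b_hat = [sum(d) / len(d) for d in b_draws]
```

    After burn-in, the posterior means of the difficulty parameters should recover the ordering of the true difficulties (easiest item lowest, hardest item highest).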

  19. Transferability of calibrated microsimulation model parameters for safety assessment using simulated conflicts.

    PubMed

    Essa, Mohamed; Sayed, Tarek

    2015-11-01

    Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. As well, the results have emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection which maximized the correlation between simulated and field-observed conflicts were used to estimate traffic conflicts at the second intersection and to compare the results to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations as the transferred parameters provided better correlation between simulated and field-measured conflicts than using the default VISSIM parameters. Of the six VISSIM parameters identified as…

  20. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
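    The class-based adjustment itself reduces to computing a per-class elevation-error statistic at check points and subtracting it from the lidar surface. A sketch with synthetic marsh points and a median-based two-class correction (all offsets and noise levels invented for illustration):

```python
import math
import random

random.seed(5)

# Synthetic check points: (lidar_z, true_z, biomass_class). Lidar reads high
# in marsh grass, more so where biomass is dense (illustrative offsets only).
BIAS = {"low": 0.25, "high": 0.70}
points = []
for _ in range(200):
    true_z = random.uniform(0.0, 1.0)
    cls = random.choice(["low", "high"])
    lidar_z = true_z + BIAS[cls] + random.gauss(0.0, 0.15)
    points.append((lidar_z, true_z, cls))

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Per-class median error at the check points becomes the DEM correction.
adjust = {c: median([lz - tz for lz, tz, cc in points if cc == c])
          for c in ("low", "high")}

raw_rmse = rmse([lz - tz for lz, tz, _ in points])
adj_rmse = rmse([lz - adjust[c] - tz for lz, tz, c in points])
```

    Subtracting the class medians removes most of the vegetation-induced high bias, so the adjusted RMS error falls well below the raw one, mirroring the 0.65 m to 0.40 m improvement reported above.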

  1. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    DOE PAGES

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; ...

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.

  2. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  3. Dynamics in the Parameter Space of a Neuron Model

    NASA Astrophysics Data System (ADS)

    Rech, Paulo C.

    2012-06-01

    Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
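    A parameter-plane diagram of this kind is built by estimating the largest Lyapunov exponent at each parameter pair. The sketch below estimates that exponent for a single parameter choice with the two-trajectory (Benettin) renormalization method, using the classic three-variable Hindmarsh-Rose model as a stand-in for the paper's four-dimensional thirteen-parameter variant; parameter values and integration settings are assumptions:

```python
import math

# Classic three-variable Hindmarsh-Rose equations with standard parameters.
A, B, C, D, R, S, XR, I_EXT = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.25

def f(v):
    x, y, z = v
    return (y - A * x ** 3 + B * x * x - z + I_EXT,
            C - D * x * x - y,
            R * (S * (x - XR) - z))

def rk4(v, dt):
    k1 = f(v)
    k2 = f(tuple(v[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = f(tuple(v[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = f(tuple(v[i] + dt * k3[i] for i in range(3)))
    return tuple(v[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                 for i in range(3))

# Benettin two-trajectory method: follow a reference orbit and a nearby one,
# renormalize their separation periodically, average the log stretching rate.
dt, d0 = 0.01, 1e-8
v1 = (0.1, 0.0, 0.0)
for _ in range(20000):                 # discard the transient
    v1 = rk4(v1, dt)
v2 = (v1[0] + d0, v1[1], v1[2])
acc, steps = 0.0, 60000
for k in range(steps):
    v1, v2 = rk4(v1, dt), rk4(v2, dt)
    if (k + 1) % 10 == 0:
        dist = math.sqrt(sum((v2[i] - v1[i]) ** 2 for i in range(3)))
        acc += math.log(dist / d0)
        v2 = tuple(v1[i] + d0 * (v2[i] - v1[i]) / dist for i in range(3))

lle = acc / (steps * dt)               # largest Lyapunov exponent estimate
```

    A parameter-plane scan would repeat this estimate over a grid of two chosen parameters (for instance `I_EXT` and `R`) and colour each cell by the sign of `lle` to expose the comb-shaped chaotic regions.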

  4. An appraisal-based coping model of attachment and adjustment to arthritis.

    PubMed

    Sirois, Fuschia M; Gick, Mary L

    2016-05-01

    Guided by pain-related attachment models and coping theory, we used structural equation modeling to test an appraisal-based coping model of how insecure attachment was linked to arthritis adjustment in a sample of 365 people with arthritis. The structural equation modeling analyses revealed indirect and direct associations of anxious and avoidant attachment with greater appraisals of disease-related threat, less perceived social support to deal with this threat, and less coping efficacy. There was evidence of reappraisal processes for avoidant but not anxious attachment. Findings highlight the importance of considering attachment style when assessing how people cope with the daily challenges of arthritis.

  5. Modelling the rate of change in a longitudinal study with missing data, adjusting for contact attempts.

    PubMed

    Akacha, Mouna; Hutton, Jane L

    2011-05-10

    The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, where we adjust for missingness by modelling the missingness process. The conclusions that recovery is slower, and less satisfactory with age and more rapid with below knee cast than with a tubular bandage do not alter materially across all models investigated. The results also suggest that phone calls are most effective in retrieving questionnaires.

  6. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at… Using a modified version of Chebyshev’s inequality, a similar result is obtained for the upper bound of the confidence interval width for any

  7. State and Parameter Estimation for a Coupled Ocean-Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
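    The augmented-state EKF used for joint state-parameter estimation can be sketched on a scalar toy problem (logistic growth with an unknown rate, not the ENSO model): the parameter is appended to the state, propagated as a random walk, and corrected through the cross terms of the covariance. All numbers below are illustrative assumptions.

```python
import random

random.seed(2)

# Truth: logistic growth x' = r*x*(1-x) with r unknown to the filter.
R_TRUE, DT = 0.5, 0.1
x, obs = 0.05, []
for _ in range(300):
    x = x + DT * R_TRUE * x * (1 - x)
    obs.append(x + random.gauss(0.0, 0.01))

# Augmented state z = (x, r); r is modeled as a slow random walk.
z = [0.05, 1.0]                        # deliberately wrong initial r
P = [[0.1, 0.0], [0.0, 0.5]]           # state covariance
Q = [[1e-6, 0.0], [0.0, 1e-6]]         # process noise
R_OBS = 0.01 ** 2                      # observation noise variance

for y in obs:
    xk, rk = z
    # Forecast through the nonlinear model; F is its Jacobian at (xk, rk).
    z = [xk + DT * rk * xk * (1 - xk), rk]
    F = [[1 + DT * rk * (1 - 2 * xk), DT * xk * (1 - xk)],
         [0.0, 1.0]]
    # P <- F P F^T + Q, done in two 2x2 multiplications.
    P = [[sum(F[i][k] * P[k][l] for k in range(2)) for l in range(2)] for i in range(2)]
    P = [[sum(P[i][k] * F[j][k] for k in range(2)) + Q[i][j] for j in range(2)] for i in range(2)]
    # Measurement update with H = [1, 0] (only x is observed); the cross
    # covariance P[1][0] is what carries the innovation into the parameter.
    s = P[0][0] + R_OBS
    K = [P[0][0] / s, P[1][0] / s]
    innov = y - z[0]
    z = [z[0] + K[0] * innov, z[1] + K[1] * innov]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

r_est = z[1]
```

    The parameter is only identifiable while the trajectory is informative (here, during the growth transient); once the state saturates, its estimate is simply retained, which parallels the abstract's observation that parameter adjustments concentrate in dynamically active periods.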

  8. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and the number and location of sensor readings can then be determined from the accuracies of the parameter estimates.
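    The batch least-squares idea can be sketched on a one-dimensional instantaneous-release profile (a simplification of the paper's two-dimensional shear-diffusion model): because the concentration is linear in the released mass M, only the diffusivity K needs a one-dimensional search. The sensor layout, noise level, and true parameters are illustrative assumptions.

```python
import math
import random

random.seed(4)

# Truth: instantaneous release observed at time T along a sensor array:
# c(x) = M / sqrt(4*pi*K*T) * exp(-x^2 / (4*K*T)).
M_TRUE, K_TRUE, T = 5.0, 2.0, 3.0
xs = [-10 + 0.5 * k for k in range(41)]

def template(K):
    # Unit-mass concentration profile; the full profile is M times this.
    return [math.exp(-x * x / (4 * K * T)) / math.sqrt(4 * math.pi * K * T)
            for x in xs]

truth = [M_TRUE * g for g in template(K_TRUE)]
ys = [c + random.gauss(0.0, 0.005) for c in truth]   # simulated remote-sensed data

# Batch least squares: for each trial K the best M has a closed form
# (a projection), so only K requires a grid search.
best, K = None, 0.5
while K <= 5.0:
    g = template(K)
    m_hat = sum(y * gi for y, gi in zip(ys, g)) / sum(gi * gi for gi in g)
    sse = sum((y - m_hat * gi) ** 2 for y, gi in zip(ys, g))
    if best is None or sse < best[0]:
        best = (sse, K, m_hat)
    K += 0.01
_, K_est, M_est = best
```

    Repeating this fit for different noise levels or sensor spacings is exactly how the accuracy of the parameter estimates can be turned into requirements on the sensing system.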

  9. A Logical Difficulty of the Parameter Setting Model.

    ERIC Educational Resources Information Center

    Sasaki, Yoshinori

    1990-01-01

    Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)

  10. Determining extreme parameter correlation in ground water models.

    USGS Publications Warehouse

    Hill, M.C.; Osterby, O.

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter correlation coefficients, but it remained reliable with sensitivities one to two significant digits less accurate than those required by parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters were more equally sensitive. When the statistical measures fail, parameter correlation can be identified only by the tedious process of executing regression using different sets of starting values, or, in some circumstances, through graphs of the objective function.
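    Both diagnostics are easy to state for two parameters. In the toy sketch below (not the paper's test problems), heads depend on recharge R and conductivity K only through the ratio R/K, so the two sensitivity columns are proportional: the parameter correlation coefficient is exactly 1 and the smallest singular value of the Jacobian vanishes. A single flow observation sensitive to R alone breaks the correlation.

```python
import math

# Toy setting: every head observation has the form h_i = (R/K) * f_i.
R, K = 0.3, 10.0
f = [1.0, 2.0, 3.5, 5.0, 4.0]

# Sensitivity (Jacobian) columns at the current parameter values.
dh_dR = [fi / K for fi in f]
dh_dK = [-R * fi / K ** 2 for fi in f]

def diagnostics(col1, col2):
    a = sum(u * u for u in col1)
    b = sum(v * v for v in col2)
    c = sum(u * v for u, v in zip(col1, col2))
    # Two-parameter correlation coefficient implied by (J^T J)^{-1}.
    pcc = -c / math.sqrt(a * b)
    # Singular values of J from the eigenvalues of the 2x2 matrix J^T J.
    disc = math.sqrt((a - b) ** 2 + 4 * c * c)
    sv_max = math.sqrt((a + b + disc) / 2)
    sv_min = math.sqrt(max((a + b - disc) / 2, 0.0))
    return pcc, sv_min, sv_max

pcc_heads, svmin_heads, svmax_heads = diagnostics(dh_dR, dh_dK)

# One flow observation sensitive to R but not K, appended to each column.
pcc_mixed, svmin_mixed, svmax_mixed = diagnostics(dh_dR + [1.0], dh_dK + [0.0])
```

    With heads only, `pcc_heads` rounds to 1.00 and the smallest singular value is numerically zero; adding the flow observation drops the correlation below 1 and lifts the smallest singular value away from zero, which is the behavior both diagnostics are meant to detect.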

  11. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
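    The detection step can be sketched as an ordinary linear regression across fitted parameter pairs: a high coefficient of determination signals that one parameter can be expressed through the other, removing a free parameter. The Johnson-Cook-style (B, n) hardening pairs below are invented for illustration, not the paper's iron and steel data:

```python
# Hypothetical fitted hardening parameters (B in MPa, n dimensionless) for a
# family of steels; invented numbers for illustration.
pairs = [(350.0, 0.18), (420.0, 0.22), (500.0, 0.26), (610.0, 0.31),
         (680.0, 0.35), (760.0, 0.38)]

n = len(pairs)
mx = sum(p[0] for p in pairs) / n
my = sum(p[1] for p in pairs) / n
sxx = sum((p[0] - mx) ** 2 for p in pairs)
sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
syy = sum((p[1] - my) ** 2 for p in pairs)

slope = sxy / sxx                      # interdependency: n ≈ slope * B + intercept
intercept = my - slope * mx
r2 = sxy * sxy / (sxx * syy)           # strength of the interdependency
```

    If `r2` is close to 1 across the material class, the model can carry B alone and derive n from the fitted relation, which is the kind of parameter-count reduction the abstract describes.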

  12. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  13. Solar Parameters for Modeling the Interplanetary Background

    NASA Astrophysics Data System (ADS)

    Bzowski, Maciej; Sokół, Justyna M.; Tokumaru, Munetoshi; Fujiki, Kenichi; Quémerais, Eric; Lallement, Rosine; Ferron, Stéphane; Bochsler, Peter; McComas, David J.

    The goal of the working group on cross-calibration of past and present ultraviolet (UV) datasets of the International Space Science Institute (ISSI) in Bern, Switzerland was to establish a photometric cross-calibration of various UV and extreme ultraviolet (EUV) heliospheric observations. Realization of this goal required a credible and up-to-date model of the spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the latter part of the project: the solar factors responsible for shaping the distribution of neutral interstellar H in the heliosphere. In this paper we present the solar Lyman-α flux and the solar Lyman-α resonant radiation pressure force acting on neutral H atoms in the heliosphere. We will also discuss solar EUV radiation and resulting photoionization of heliospheric hydrogen along with their evolution in time and the still hypothetical variation with heliolatitude. Furthermore, the solar wind and its evolution with solar activity are presented, mostly in the context of charge exchange ionization of heliospheric neutral hydrogen, and dynamic pressure variations. Also electron-impact ionization of neutral heliospheric hydrogen and its variation with time, heliolatitude, and solar distance is discussed. After a review of the state of the art in all of those topics, we proceed to present an interim model of the solar wind and the other solar factors based on up-to-date in situ and remote sensing observations. This model was used by Izmodenov et al. (2013, this volume) to calculate the distribution of heliospheric hydrogen, which in turn was the basis for intercalibrating the heliospheric UV and EUV measurements discussed in Quémerais et al. (2013, this volume). Results of this joint effort will also be used to improve the model of the solar wind evolution, which will be an invaluable asset in interpretation of…

  14. Parameter-adjusted stochastic resonance system for the aperiodic echo chirp signal in optimal FrFT domain

    NASA Astrophysics Data System (ADS)

    Lin, Li-feng; Yu, Lei; Wang, Huiqi; Zhong, Suchuan

    2017-02-01

    In order to improve system performance for moving target detection and localization, this paper presents a new stochastic dynamical system driven by an aperiodic chirp signal and additive noise, in which the internal frequency varies linearly to match the driving frequency. By applying the fractional Fourier transform (FrFT) operator with the optimal order, the proposed time-domain dynamical system is transformed into an equivalent FrFT-domain system driven by a periodic signal and noise. System performance can therefore be analyzed conveniently in terms of the output signal-to-noise ratio (SNR) in the optimal FrFT domain. Simulation results demonstrate that the output SNR, as a function of the system parameters, shows different generalized SR behaviors for various internal parameters of the driving chirp signal and external parameters of the moving target.

  15. The HHS-HCC Risk Adjustment Model for Individual and Small Group Markets under the Affordable Care Act

    PubMed Central

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
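    The scoring step described at the end can be sketched as a sum of a demographic coefficient and condition-category coefficients, with hierarchies ensuring that only the most severe present condition in a group is counted. All coefficients, cell labels, and category names below are invented for illustration, not the actual HHS-HCC values:

```python
# Hypothetical coefficients loosely mimicking the HHS-HCC structure.
DEMO = {("adult_F", "45-54", "silver"): 0.35}
HCC_COEF = {"diabetes_severe": 1.20, "diabetes_mild": 0.40, "asthma": 0.25}
# Within a hierarchy, a more severe condition suppresses the milder ones.
HIERARCHY = {"diabetes_severe": ["diabetes_mild"]}

def risk_score(demo_cell, hccs):
    counted = set(hccs)
    for parent, children in HIERARCHY.items():
        if parent in counted:
            counted -= set(children)          # count only the dominant HCC
    return DEMO[demo_cell] + sum(HCC_COEF[h] for h in counted)

score = risk_score(("adult_F", "45-54", "silver"),
                   {"diabetes_severe", "diabetes_mild", "asthma"})
```

    Here the mild diabetes category is suppressed by the severe one, so the score is 0.35 + 1.20 + 0.25 = 1.80; scores like this are the inputs to the risk transfer formula described in the companion paper.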

  16. Modelling goal adjustment in social relationships: Two experimental studies with children and adults.

    PubMed

    Thomsen, Tamara; Kappes, Cathleen; Schwerdt, Laura; Sander, Johanna; Poller, Charlotte

    2016-10-23

    In two experiments, we investigated observational learning in social relationships as one possible pathway to the development of goal adjustment processes. In the first experiment, 56 children (M = 9.29 years) observed their parent as a model; in the second, 50 adults (M = 32.27 years) observed their romantic partner. Subjects were randomly assigned to three groups: goal engagement (GE), goal disengagement (GD), or control group (CO) and were asked to solve (unsolvable) puzzles. Before trying to solve the puzzles by themselves, subjects observed the instructed model, who was told to continue with the same puzzle (GE) or to switch to the next puzzle (GD). Results show that children in the GE group switched significantly less than in the GD or CO group. There was no difference between the GD group and CO group. Adults in the GE group switched significantly less than in the GD or CO group, whereas subjects in the GD group switched significantly more often than the CO group. Statement of contribution What is already known on this subject? Previous research focused mainly on the functions of goal adjustment processes. It rarely considered processes and conditions that contribute to the development of goal engagement and goal disengagement. There are only two cross-sectional studies that directly investigate this topic. Previous research that claims observational learning as a pathway of learning emotion regulation or adjustment processes has (only) relied on correlational methods and, thus, do not allow any causal interpretations. Previous research, albeit claiming a life span focus, mostly investigated goal adjustment processes in one specific age group (mainly adults). There is no study that investigates the same processes in different age groups. What does this study add? In our two studies, we focus on the conditions of goal adjustment processes and sought to demonstrate one potential pathway of learning or changing the application of goal adjustment…

  17. Army Physical Therapy Productivity According to the Performance Based Adjustment Model

    DTIC Science & Technology

    2008-05-02

    FTE) data from 34 military treatment facilities (MTFs). Results: Statistical process control identified extensive special cause variation in Army PT... Treatment Facility (MTF) efficiency with specialty specific productivity benchmarks established by the Performance Based Adjustment Model (PBAM...generates 1.2 RVUs and a 15-minute ultrasound treatment generates .21 RVUs of workload. See Appendix A for a list of commonly used physical therapy

  18. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged between 6 to 16 diagnosed with PDD answered several questionnaires…

  19. Identification of hydrological model parameter variation using ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao

    2016-12-01

    Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. An ensemble Kalman filter (EnKF) technique is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
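    Tracking a drifting parameter with an EnKF amounts to appending the parameter to the state and letting it evolve as a random walk. The sketch below does this for a generic single-store monthly balance with an upward-drifting capacity SC, assimilating storage observations; the model equations, forcing, and all numbers are illustrative assumptions, not the TWBM or the basin data:

```python
import random
import statistics

random.seed(9)

def runoff(S, SC):
    # Generic single-store monthly balance (not the exact TWBM equations):
    # runoff rises with storage and is damped by the capacity parameter SC.
    return S * S / (S + SC)

# Truth: capacity SC drifts upward, e.g. due to soil-conservation measures.
months = 240
P = [60.0 + 40.0 * random.random() for _ in range(months)]   # rainfall
E = 30.0                                                     # fixed ET loss
SC_true = [100.0 + 50.0 * t / months for t in range(months)]
S, s_obs = 80.0, []
for t in range(months):
    S = max(S + P[t] - E - runoff(S, SC_true[t]), 1.0)
    s_obs.append(S + random.gauss(0.0, 2.0))                 # observed storage

# EnKF with SC appended to the state and evolved as a random walk, so the
# filter can follow a temporal variation of the parameter.
N, R_VAR = 200, 4.0
ens = [[80.0 + random.gauss(0, 10), 120.0 + random.gauss(0, 30)] for _ in range(N)]
sc_path = []
for t in range(months):
    for m in ens:
        m[1] += random.gauss(0.0, 1.0)                       # parameter random walk
        m[0] = max(m[0] + P[t] - E - runoff(m[0], m[1]), 1.0)
    s_bar = statistics.fmean(m[0] for m in ens)
    sc_bar = statistics.fmean(m[1] for m in ens)
    var_s = sum((m[0] - s_bar) ** 2 for m in ens) / (N - 1)
    cov_cs = sum((m[1] - sc_bar) * (m[0] - s_bar) for m in ens) / (N - 1)
    k_s = var_s / (var_s + R_VAR)
    k_c = cov_cs / (var_s + R_VAR)
    for m in ens:
        d = s_obs[t] + random.gauss(0.0, 2.0) - m[0]
        m[0] += k_s * d
        m[1] += k_c * d
    sc_path.append(statistics.fmean(m[1] for m in ens))

sc_early = statistics.fmean(sc_path[20:60])
sc_late = statistics.fmean(sc_path[-40:])
```

    The random-walk evolution keeps the parameter ensemble diffuse enough that the estimated SC can climb with the true trend, which is the mechanism behind detecting the kind of increasing SC reported for the Wudinghe basin.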

  20. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with an a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as kernel in the decomposition. The different decomposition approaches in use, which also include conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
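    The comparison described here can be illustrated with a small numerical sketch. The following is not the authors' procedure; it is a generic Debye decomposition of a synthetic Cole-Cole resistivity spectrum via non-negative least squares over a log-spaced relaxation-time grid, with illustrative CC parameters and grid limits, from which the integral parameters (total chargeability and log-mean relaxation time) are computed.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic Cole-Cole spectrum (rho_0 = 1); parameters are illustrative.
f = np.logspace(-3, 3, 100)             # frequencies [Hz]
w = 2 * np.pi * f
m_cc, tau_cc, c_cc = 0.1, 0.01, 0.5     # CC input parameters
rho = 1.0 - m_cc * (1.0 - 1.0 / (1.0 + (1j * w * tau_cc) ** c_cc))

# Debye decomposition: fit non-negative chargeabilities m_k on a tau grid.
taus = np.logspace(-5, 2, 100)
G = 1.0 - 1.0 / (1.0 + 1j * np.outer(w, taus))   # Debye kernel responses
A = np.vstack([G.real, G.imag])                  # stack Re and Im parts
d = np.concatenate([(1.0 - rho).real, (1.0 - rho).imag])

m_k, _ = nnls(A, d)
m_tot = m_k.sum()                                 # total chargeability
tau_mean = np.exp(np.sum(m_k * np.log(taus)) / m_tot)  # log-mean tau
print(m_tot, tau_mean)
```

Moving the CC peak toward the grid or frequency-range limits, or changing the CC dispersion parameter, shifts `m_tot` and `tau_mean` away from the input values, which is the kind of bias the study quantifies.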

  1. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
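    A sloppy sensitivity spectrum is easy to reproduce on a toy problem. The sketch below uses a two-exponential model with nearly degenerate rates (an illustrative example, not one of the paper's systems-biology models) and computes the eigenvalues of J^T J in log-parameters, which span many decades.

```python
import numpy as np

# Toy model: y(t) = A1*exp(-r1*t) + A2*exp(-r2*t) with similar rates.
t = np.linspace(0, 5, 50)
params = np.array([1.0, 1.0, 1.0, 1.3])   # A1, r1, A2, r2

def model(p):
    a1, r1, a2, r2 = p
    return a1 * np.exp(-r1 * t) + a2 * np.exp(-r2 * t)

# Finite-difference Jacobian with respect to log-parameters (the usual
# convention in sloppy-model analyses).
eps = 1e-6
y0 = model(params)
J = np.empty((t.size, params.size))
for k in range(params.size):
    p = params.copy()
    p[k] *= 1.0 + eps
    J[:, k] = (model(p) - y0) / np.log(1.0 + eps)

eigvals = np.linalg.eigvalsh(J.T @ J)     # approximate FIM spectrum
decades = np.log10(eigvals.max() / eigvals.min())
print(f"eigenvalues span {decades:.1f} decades")
```

The roughly even spacing of eigenvalues on a log scale is the signature the paper calls "sloppiness": a few stiff directions are well constrained by data while the remaining sloppy directions are nearly unconstrained.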

  2. The definition of hydrologic model parameters using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Salomonson, V. V.

    1978-01-01

    The reported investigation is concerned with the use of Landsat remote sensing to define input parameters for an array of hydrologic models which are used to synthesize streamflow and water quality parameters in the planning or management process. The ground truth sampling and problems involved in translating the remotely sensed information into hydrologic model parameters are discussed. Questions related to the modification of existing models for compatibility with remote sensing capabilities are also examined. It is shown that the input parameters of many models are presently overdefined in terms of the sensitivity and accuracy of the model. When this overdefinition is recognized, many of the models currently considered incompatible with remote sensing capabilities can be modified for use with sensors having rather low resolutions.

  3. State and parameter estimation for canonic models of neural oscillators.

    PubMed

    Tyukin, Ivan; Steur, Erik; Nijmeijer, Henk; Fairhurst, David; Song, Inseon; Semyanov, Alexey; Van Leeuwen, Cees

    2010-06-01

    We consider the problem of how to recover the state and parameter values of typical model neurons, such as Hindmarsh-Rose, FitzHugh-Nagumo, and Morris-Lecar, from in vitro measurements of membrane potentials. In control theory, in terms of observer design, model neurons qualify as locally observable. However, unlike most models traditionally addressed in control theory, no parameter-independent diffeomorphism exists such that the original model equations can be transformed into adaptive canonic observer form. For a large class of model neurons, however, state and parameter reconstruction is possible nevertheless. We propose a method which, subject to mild conditions on the richness of the measured signal, allows model parameters and state variables to be reconstructed up to an equivalence class.

  4. Evaluation of the storage function model parameter characteristics

    NASA Astrophysics Data System (ADS)

    Sugiyama, Hironobu; Kadoya, Mutsumi; Nagai, Akihiro; Lansey, Kevin

    1997-04-01

    The storage function hydrograph model is one of the most commonly used models for flood runoff analysis in Japan. This paper studies the generality of the approach and its application to Japanese basins. Through a comparison of the basic equations for the models, the storage function model parameters, K, P, and T1, are shown to be related to the terms, k and p, in the kinematic wave model. This analysis showed that P and p are identical and K and T1 can be related to k, the basin area and its land use. To apply the storage function model throughout Japan, regional parameter relationships for K and T1 were developed for different land-use conditions using data from 22 watersheds and 91 flood events. These relationships combine the kinematic wave parameters with general topographic information using Hack's Law. The sensitivity of the parameters and their physical significance are also described.
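    The storage function relation described above can be sketched numerically. The following is a minimal sketch with illustrative parameter values (not the paper's calibrated regional relationships): storage follows S = K·Q^P, the lag T1 is applied by shifting the effective rainfall, and continuity dS/dt = r − Q is integrated explicitly.

```python
import numpy as np

# Storage-function routing sketch: S = K * Q**P, continuity dS/dt = r - Q,
# lag TL applied to the effective-rainfall input. Values are illustrative.
K, P, TL = 30.0, 0.6, 2       # storage coefficient, exponent, lag (steps)
dt = 1.0                       # time step [h]
rain = np.zeros(100)
rain[5:15] = 10.0              # effective rainfall pulse [mm/h]
rain = np.roll(rain, TL)       # apply the lag time

S, Q = 0.0, np.zeros(rain.size)
for i in range(1, rain.size):
    q = (S / K) ** (1.0 / P) if S > 0 else 0.0   # invert S = K*Q**P
    S = max(S + dt * (rain[i] - q), 0.0)         # continuity equation
    Q[i] = q

total_out = Q.sum() * dt
print(Q.max(), total_out)      # attenuated peak; outflow approaches rainfall
```

The routed hydrograph peak is attenuated and delayed relative to the rainfall pulse, and total outflow approaches total effective rainfall as the storage drains.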

  5. Extraction of exposure modeling parameters of thick resist

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei

    2004-12-01

    Experimental and theoretical analysis indicates that many nonlinear factors existing in the exposure process of thick resist can remarkably affect the PAC concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and exposure parameters should not be treated as constants, because a definite relationship exists between the parameters and resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and an experimental setup for measuring the exposure parameters of thick resist is developed. We measure the intensity transmittance curve of the thick resist AZ4562 under different processing conditions, and extract the corresponding exposure parameters based on the experimental results and calculations from the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, simulation of the thick-resist exposure process can be effectively developed in the future.
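    The baseline that such enhanced models extend is the standard Dill exposure system, which can be sketched in a few lines. The A, B, C values and film thickness below are illustrative placeholders, not the measured AZ4562 parameters from this work.

```python
import numpy as np

# Standard Dill exposure equations (illustrative parameters):
#   dI/dz = -I * (A*M + B)    light absorption through the resist
#   dM/dt = -I * C * M        bleaching of the PAC concentration M
A, B, C = 0.8, 0.05, 0.02     # [1/um, 1/um, cm^2/mJ], illustrative
depth, nz = 50.0, 500          # resist thickness [um], depth cells
dz = depth / nz
I0, dt, nt = 100.0, 0.05, 400  # surface intensity [mW/cm^2], time step [s]

M = np.ones(nz)                # normalized PAC concentration profile
for _ in range(nt):
    I = np.empty(nz)
    I[0] = I0
    for j in range(1, nz):     # integrate intensity downward
        I[j] = I[j - 1] * (1.0 - dz * (A * M[j - 1] + B))
    M *= np.exp(-C * I * dt)   # bleach PAC with the local dose

print(M[0], M[-1])             # surface is bleached long before the bottom
```

The strong depth dependence of the remaining PAC is exactly why, for thick resists, constant exposure parameters become a poor approximation and thickness-dependent corrections of the kind proposed here are needed.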

  6. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    PubMed

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this model. In China, research applying LUR models has been done mostly at the model construction stage, and findings from related epidemiological studies are rarely reported. In this paper, the sources of heterogeneity and the progress of meta-analysis research on the associations between air pollution and adverse pregnancy outcomes are analyzed. The characteristics and methodology of temporally-adjusted LUR models are introduced. Current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  7. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  8. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    PubMed

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy, but does not affect the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical values. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the viewpoint of simulation, we proceeded with an in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time, and the nucleation rate remains constant after a period of relaxation, both of which are consistent with experimental observations. The linear relation between logarithmic nucleation rate and reciprocal driving force also conforms to steady-state nucleation theory.

  9. Improvements in simulation of atmospheric boundary layer parameters through data assimilation in ARPS mesoscale atmospheric model

    NASA Astrophysics Data System (ADS)

    Subrahamanyam, D. Bala; Ramachandran, Radhika; Kunhikrishnan, P. K.

    2006-12-01

    In a broad sense, 'data assimilation' refers to a technique whereby realistic observational datasets are injected into a model simulation to produce more accurate forecasts. Several schemes are available for the insertion of observational datasets into a model. In this study, we present one of the simplest yet most powerful data assimilation techniques, known as nudging through optimal interpolation, in the ARPS (Advanced Regional Prediction System) model. Through this technique, we first identify the assimilation window in space and time over which the observational datasets need to be inserted and the model products need to be adjusted. Appropriate model variables are then adjusted toward the realistic observational datasets, with proper weight given to the observations. Incorporating such a subroutine in the model to handle the assimilation provides a powerful tool for improving the forecast parameters. This technique can be very useful in cases where observational datasets are available at regular intervals. In this article, we demonstrate the effectiveness of the technique for simulating profiles of atmospheric boundary layer parameters for the tiny island of Kaashidhoo in the Republic of Maldives, where regular GPS Loran Atmospheric Soundings were carried out during the Intensive Field Phase of the Indian Ocean Experiment (INDOEX, IFP-99).
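    The nudging idea itself reduces to adding a relaxation term toward the observation inside the assimilation window. The sketch below uses toy scalar dynamics and illustrative numbers, not the ARPS implementation, to show the effect of the nudging term on the simulated state.

```python
import numpy as np

# Newtonian nudging sketch: relax a model state toward an observed value
# inside an assimilation window, with relaxation strength `gain`.
def step(x, t, dt, gain, obs, window):
    dxdt = -0.5 * x + np.sin(t)           # toy model tendency
    if window[0] <= t <= window[1]:       # inside the assimilation window
        dxdt += gain * (obs - x)          # nudging (relaxation) term
    return x + dt * dxdt

dt, obs, window = 0.01, 2.0, (3.0, 10.0)
x_free, x_nudged = 0.0, 0.0
for i in range(1000):                     # integrate t = 0 .. 10
    t = i * dt
    x_free = step(x_free, t, dt, 0.0, obs, window)
    x_nudged = step(x_nudged, t, dt, 5.0, obs, window)

print(abs(x_nudged - obs), abs(x_free - obs))
```

Inside the window the nudged run is pulled close to the observed value while the free run continues to follow the model's own dynamics; the relaxation strength plays the role of the observation weight mentioned above.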

  10. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    DTIC Science & Technology

    2012-11-01

    For Gulf of Mexico hurricanes, wind fields produced by the Rankine Vortex model show considerable differences between the resulting wind speeds and observed data. These differences are used to guide the development of adjustment factors to improve the wind fields resulting from the Rankine Vortex model. The corrected model shows a significant improvement in the shape, size, and wind speed contours for 14 out of 17 hurricanes examined. The effects on the wave fields resulting from the original and modified wind fields are on the order of 4 m, which is important for the estimation of extreme wave heights.
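    The baseline profile being adjusted is the classic Rankine vortex: solid-body rotation inside the radius of maximum wind and a power-law decay outside. The decay exponent below is the kind of knob such adjustment factors tune; all values are illustrative, not the report's fitted corrections.

```python
import numpy as np

# Rankine vortex wind profile: v = Vmax*r/Rmax for r <= Rmax,
# v = Vmax*(Rmax/r)**alpha outside (alpha = 1 is the classical form).
def rankine(r, vmax, rmax, alpha=1.0):
    r = np.asarray(r, dtype=float)
    return np.where(r <= rmax,
                    vmax * r / rmax,              # solid-body core
                    vmax * (rmax / r) ** alpha)   # outer decay

r = np.array([10.0, 30.0, 60.0, 120.0])   # radii [km]
v = rankine(r, vmax=50.0, rmax=30.0)       # wind speeds [m/s]
print(v)                                   # peak at Rmax, decaying outward
```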

  11. Adjustment of the k-ω SST turbulence model for prediction of airfoil characteristics near stall

    NASA Astrophysics Data System (ADS)

    Matyushenko, A. A.; Garbaruk, A. V.

    2016-11-01

    A version of the k-ω SST turbulence model adjusted for flow around airfoils at high Reynolds numbers is presented. The modified version decreases eddy viscosity and significantly improves the accuracy of prediction of aerodynamic characteristics in a wide range of angles of attack. However, this reduction of eddy viscosity destroys the calibration of the model, which decreases the accuracy of skin-friction coefficient prediction even for relatively simple wall-bounded turbulent flows. Therefore, the area of applicability of the suggested modification is limited to flows around airfoils.

  12. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well using several modeling situations. Test situations differ in the employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly smaller discretization capable of analyzing ASR recovery efficiency. Some of the addressed situations employ parameters interpolated from the coarse valley model. Other situations employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, and assume homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  13. Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model

    PubMed Central

    Chen, Mengjie; Ren, Zhao; Zhao, Hongyu; Zhou, Harrison

    2015-01-01

    A tuning-free procedure is proposed to estimate the covariate-adjusted Gaussian graphical model. For each finite subgraph, this estimator is asymptotically normal and efficient. As a consequence, a confidence interval can be obtained for each edge. The procedure enjoys easy implementation and efficient computation through parallel estimation on subgraphs or edges. We further apply the asymptotic normality result to perform support recovery through edge-wise adaptive thresholding. This support recovery procedure is called ANTAC, standing for Asymptotically Normal estimation with Thresholding after Adjusting Covariates. ANTAC outperforms other methodologies in the literature in a range of simulation studies. We apply ANTAC to identify gene-gene interactions using an eQTL dataset. Our result achieves better interpretability and accuracy in comparison with CAMPE. PMID:27499564

  14. Computationally Inexpensive Identification of Non-Informative Model Parameters

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Kumar, R.; Zink, M.; Samaniego, L. E.; Schaefer, D.; Thober, S.; Rakovec, O.; Musuuza, J. L.; Craven, J. R.; Spieler, D.; Schrön, M.; Prykhodko, V.; Dalmasso, G.; Langenberg, B.; Attinger, S.

    2014-12-01

    Sensitivity analysis is used, for example, to identify parameters which induce the largest variability in model output and are thus informative during calibration. Variance-based techniques are employed for this purpose, but they unfortunately require a large number of model evaluations and are thus impractical for complex environmental models. We therefore developed a computationally inexpensive screening method, based on Elementary Effects, that automatically separates informative and non-informative model parameters. The method was tested using the mesoscale hydrologic model (mHM) with 52 parameters. The model was applied in three European catchments with different hydrological characteristics, i.e., the Neckar (Germany), Sava (Slovenia), and Guadalquivir (Spain). The method identified the same informative parameters as the standard Sobol' method but with less than 1% of the model runs. In Germany and Slovenia, 22 of the 52 parameters were informative, mostly in the formulations of evapotranspiration, interflow, and percolation. In Spain, 19 of the 52 parameters were informative, with an increased importance of soil parameters. We showed further that Sobol' indexes calculated for the subset of informative parameters are practically the same as the Sobol' indexes before the screening, while the number of model runs was reduced by more than 50%. The model mHM was then calibrated twice in the three test catchments. First, all 52 parameters were taken into account; then only the informative parameters were calibrated while all others were kept fixed. The Nash-Sutcliffe efficiencies were 0.87 and 0.83 in Germany, 0.89 and 0.88 in Slovenia, and 0.86 and 0.85 in Spain, respectively. This minor loss of at most 4% in model performance comes along with a substantial decrease of at least 65% in model evaluations. In summary, we propose an efficient screening method to identify non-informative model parameters that can be discarded during further applications. We have shown that sensitivity
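    The Elementary Effects idea can be sketched in a few lines. The following is a simplified radial Morris-style screening on a toy three-parameter model (not the paper's sequential method or the mHM parameters): parameters whose mean absolute elementary effect is near zero are flagged as non-informative.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy model: x[2] deliberately has no influence on the output.
    return np.sin(x[0]) + 2.0 * x[1] ** 2 + 0.0 * x[2]

ntraj, npar, delta = 50, 3, 0.1
mu_star = np.zeros(npar)            # mean absolute elementary effect
for _ in range(ntraj):
    base = rng.uniform(0, 1, npar)  # random base point in [0, 1]^3
    f0 = model(base)
    for k in range(npar):           # perturb one parameter at a time
        pert = base.copy()
        pert[k] += delta
        mu_star[k] += abs(model(pert) - f0) / delta
mu_star /= ntraj

# simple screening rule: small mu_star relative to the largest effect
informative = mu_star > 0.05 * mu_star.max()
print(mu_star, informative)
```

Each elementary effect costs one extra model run, which is why this screening scales to models where variance-based Sobol' analysis is out of reach; the screened-out parameters can then be fixed during calibration, as done in the study.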

  15. A dimensionless parameter model for arc welding processes

    SciTech Connect

    Fuerschbach, P.W.

    1994-12-31

    A dimensionless parameter model previously developed for CO2 laser beam welding has been shown to be applicable to GTAW and PAW autogenous arc welding processes. The model facilitates estimates of weld size, power, and speed based on knowledge of the material's thermal properties. The dimensionless parameters can also be used to estimate the melting efficiency, which eases development of weld schedules with lower heat input to the weldment. The mathematical relationship between the dimensionless parameters in the model has been shown to be dependent on the heat flow geometry in the weldment.

  16. Estimation of the input parameters in the Feller neuronal model

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  17. Field measurements and neural network modeling of water quality parameters

    NASA Astrophysics Data System (ADS)

    Qishlaqi, Afishin; Kordian, Sediqeh; Parsaie, Abbas

    2017-01-01

    Rivers are one of the main sources of water for agricultural, industrial, and urban use; therefore, continuous monitoring of their quality is necessary. Recently, artificial neural networks have been proposed as a powerful tool for modeling and predicting water quality parameters in natural streams. In this paper, a multilayer perceptron (MLP) neural network model was developed to predict water quality parameters of the Tireh River, located in southwest Iran. TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, and Ca were measured as the main water quality parameters and predicted using the MLP model. The architecture of the proposed MLP model included two hidden layers, with eight neurons in the first hidden layer and six in the second. The tangent-sigmoid and pure-line functions were selected as transfer functions for the neurons in the hidden and output layers, respectively. The results showed that the MLP model performs well in predicting the water quality parameters of the Tireh River. To assess the performance of the MLP model along the studied area, the authors considered another 14 stations in addition to the existing sampling stations. Evaluating the developed MLP model's ability to map the relation between the water quality parameters along the studied area showed that it has suitable accuracy, and the minimum correlation between the MLP model results and measured data was 0.85.
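    The modeling approach can be sketched with a miniature MLP trained by plain gradient descent on synthetic data. This is a toy stand-in, not the paper's network (which used two hidden layers of eight and six neurons and measured river data); the input/target names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for normalized water-quality
# predictors (e.g. EC, pH) and a target (e.g. TDS).
X = rng.uniform(-1, 1, (200, 2))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2

# One tanh hidden layer (8 units) with a linear output unit.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)          # error before training
for _ in range(2000):                     # full-batch gradient descent
    H, pred = forward(X)
    err = pred - y
    gW2 = H.T @ err / len(X); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2) # backprop through tanh
    gW1 = X.T @ dH / len(X);  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = np.mean((pred - y) ** 2)
print(mse0, "->", mse)                    # training error drops
```

In practice one would use a library optimizer, normalize measured inputs, and hold out stations for validation, as the study does with its additional 14 stations.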

  18. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused on Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
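    The temperature-index relation that a MAT adjustment feeds into can be sketched in its generic form. The melt factor and the additive MAT correction below are hypothetical stand-ins, not the CBRFC's calibrated SNOW17 values or the actual MODDRFS-derived relationship.

```python
# Generic temperature-index melt: melt = Mf * (T - Tbase), floored at zero.
# `mat_adjust` mimics a dust-informed warming of the mean areal temperature
# input (hypothetical magnitude).
def melt_mm(temp_c, melt_factor=2.5, t_base=0.0, mat_adjust=0.0):
    t_eff = temp_c + mat_adjust        # adjusted mean areal temperature
    return max(melt_factor * (t_eff - t_base), 0.0)

print(melt_mm(3.0), melt_mm(3.0, mat_adjust=2.0))  # adjusted melt is larger
```

Raising the effective MAT when MODDRFS indicates a low-albedo dust layer accelerates simulated melt without modifying the model structure, which is what makes the adjustment operationally convenient.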

  19. A model of mother-child adjustment in Arab Muslim immigrants to the US

    PubMed Central

    Hough, Edythe s; Templin, Thomas N; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne

    2009-01-01

    We examined mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the U.S.A. The sample of 635 mother-child dyads comprised mothers who emigrated in 1989 or later and had at least one early adolescent child between the ages of 11 and 15 who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and social support; maternal distress; parent-child relationship; and child behavior problems. A structural equation model (SEM) was specified a priori with 17 predicted pathways. With a few exceptions, the final SEM model was highly consistent with the proposed model and had a good fit to the data. The model accounted for 67% of the variance in child behavior problems. Child stressors, mother-child relationship, and maternal stressors were the causal variables that contributed the most to child behavior problems. The model also accounted for 27% of the variance in mother-child relationship. Child active coping, child gender, mother's education, and maternal distress were all predictive of the mother-child relationship. Mother-child relationship also mediated the effects of maternal distress and child active coping on child behavior problems. These findings indicate that immigrant mothers contribute greatly to adolescent adjustment, both as a source of risk and protection. They also suggest that intervening with immigrant mothers to reduce their stress and strengthening the parent-child relationship are two important areas for promoting adolescent adjustment. PMID:19758737

  20. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
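    The second step of such a two-step approach, adjusting predicted counts for imperfect detection and the effectively sampled area, amounts to a simple rescaling. The numbers below are purely illustrative, not values from the study.

```python
# Adjust raw point-count predictions to densities: divide by the estimated
# detection probability and by the effective area sampled per station.
counts = [12, 7, 0, 23]            # predicted birds per survey station
p_detect = 0.6                      # estimated detection probability
area_km2 = 3.14 * 0.1 ** 2          # effective area of a point count [km^2]

density = [c / p_detect / area_km2 for c in counts]  # birds per km^2
print(density)
```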

  1. Estimating winter wheat phenological parameters: Implications for crop modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  2. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  3. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failures was underestimation of the forecast number of events, due to model parameters being held fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learned parameters unsuitable for describing the actual seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test period.
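The conditional intensity that such forecasts are built on can be sketched as follows (a standard temporal-ETAS form with Omori-Utsu decay; the parameter values and events below are illustrative assumptions, not the paper's daily estimates):

```python
import math

def etas_rate(t, mu, events, K=0.02, alpha=1.2, c=0.01, p=1.1, M0=3.0):
    # Temporal ETAS conditional intensity in a common parameterization:
    #   lambda(t) = mu + sum_{t_i < t} K * exp(alpha * (M_i - M0)) / (t - t_i + c)**p
    # mu is the background rate; each past event adds an aftershock term
    # that grows with its magnitude and decays with elapsed time.
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - M0)) / (t - t_i + c) ** p
    return rate

# Two triggering events: (origin time in days, magnitude).
events = [(0.0, 6.0), (0.5, 4.5)]
r = etas_rate(1.0, mu=0.1, events=events)
```

Re-estimating (mu, K, alpha, c, p) each day, as the abstract describes, amounts to refitting this intensity to the catalog up to that day.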

  4. Analysis of the Second Model Parameter Estimation Experiment Workshop Results

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.

    2002-05-01

The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for a priori parameter estimation for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorological time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goals/objectives and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for the 12 basins selected for the Tucson MOPEX Workshop.

  5. Effect of Noise in the Three-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In a preceding research report, ONR/RR-82-1 (Information Loss Caused by Noise in Models for Dichotomous Items), observations were made on the effect of noise accommodated in different types of models on the dichotomous response level. In the present paper, focus is put upon the three-parameter logistic model, which is widely used among…

  6. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

  7. A Low-cost, Off-the-Shelf Ready Field Programmable Gate Array diode Laser Controller With adjustable parameters

    NASA Astrophysics Data System (ADS)

Yang, Ge; Barry, John F.; Shuman, Edward; DeMille, David

    2010-03-01

We have constructed a field programmable gate array (FPGA) based lock-in amplifier/PID servo controller for use in laser frequency locking and other applications. Our system is built from a commercial FPGA evaluation board at a total cost of less than $400; no additional electronic components are required. FPGA technology allows us to implement parallel real-time signal processing with great flexibility. Internal parameters such as the modulation frequency, phase delay, gains, and filter time constants can be changed on the fly over a very wide dynamic range through an iPod-like interface. This system was used to lock a tunable diode laser to an external Fabry-Perot cavity with piezo and current feedback. A loop bandwidth of 200 kHz was achieved, limited only by the slow ADCs available on the FPGA board. Further improvements in both hardware and software seem possible and will be discussed.

  8. Lumped Parameter Model (LPM) for Light-Duty Vehicles

    EPA Pesticide Factsheets

    EPA’s Lumped Parameter Model (LPM) is a free, desktop computer application that estimates the effectiveness (CO2 Reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.

  9. Online parameter estimation for surgical needle steering model.

    PubMed

    Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing

    2006-01-01

Estimation of system parameters from noisy input/output data is a major field in control and signal processing. Many different estimation methods have been proposed in recent years. Among them, extended Kalman filtering (EKF) is particularly useful for estimating the parameters of a nonlinear, time-varying system; moreover, it can suppress the effects of noise to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying tissue deformation. One difficulty in using such a model is estimating the spring and damper coefficients. Here, we propose an online parameter estimator using the EKF to solve this problem. The detailed design is presented in this paper. Computer simulations and physical experiments have revealed that the estimator can determine the parameters accurately, with fast convergence, and improve the model's efficacy.
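The joint state/parameter EKF idea can be sketched on a plain spring-damper oscillator (a generic illustration, not the paper's spring-beam-damper needle model; all constants, noise levels, and tuning values are assumed). The unknown stiffness k and damping c are appended to the state vector and estimated from noisy position measurements:

```python
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def f(s, dt, m=1.0):
    # Euler-discretized spring-damper dynamics; k and c follow a random walk.
    x, v, k, c = s
    return [x + dt * v, v + dt * (-(k / m) * x - (c / m) * v), k, c]

def jacobian(s, dt, m=1.0):
    x, v, k, c = s
    return [[1.0, dt, 0.0, 0.0],
            [-dt * k / m, 1.0 - dt * c / m, -dt * x / m, -dt * v / m],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

# Simulate a "recorded" position trace from a system with true k=4.0, c=0.4.
dt, n = 0.01, 1500
s_true = [1.0, 0.0, 4.0, 0.4]
zs = []
for _ in range(n):
    s_true = f(s_true, dt)
    zs.append(s_true[0] + random.gauss(0.0, 0.01))

# EKF with deliberately poor initial parameter guesses.
s = [1.0, 0.0, 1.0, 0.1]
P = [[0.01 if i == j else 0.0 for j in range(4)] for i in range(4)]
P[2][2], P[3][3] = 10.0, 1.0            # large initial parameter uncertainty
Q = [1e-8, 1e-8, 1e-6, 1e-6]            # diagonal process noise
R = 1e-4                                # measurement noise variance
for z in zs:
    F = jacobian(s, dt)
    s = f(s, dt)                        # predict state
    P = matmul(matmul(F, P), transpose(F))
    for i in range(4):
        P[i][i] += Q[i]                 # predict covariance
    innov = z - s[0]                    # measurement is position only (H = [1,0,0,0])
    S = P[0][0] + R
    K = [P[i][0] / S for i in range(4)] # Kalman gain
    s = [si + Ki * innov for si, Ki in zip(s, K)]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(4)] for i in range(4)]
```

After processing the trace, s[2] and s[3] hold the stiffness and damping estimates.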

  10. Uncertainty in dual permeability model parameters for structured soils

    NASA Astrophysics Data System (ADS)

    Arora, B.; Mohanty, B. P.; McGuire, J. T.

    2012-01-01

Successful application of dual permeability models (DPMs) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty of uniquely identifying parameters for the additional macropore and matrix-macropore interface regions, and of establishing the experimental data a DPM requires, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on the covariance structure is lacking; otherwise, additional information on parameter correlations must be supplied to resolve the equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments on soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that in drainage experiments macropores are drained first, followed by the interface region and then by the pores of the matrix domain. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
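The Metropolis-Hastings step at the core of both samplers can be sketched on a toy one-parameter problem (illustrative only, not the study's 10-parameter DPM setup; the forward model, noise level, and proposal width are assumptions):

```python
import math
import random

random.seed(1)

# Toy forward model: outflow proportional to a conductivity-like parameter
# theta (a stand-in for a soil hydraulic parameter).
def forward(theta, t):
    return theta * math.exp(-0.5 * t)

true_theta, sigma = 2.0, 0.05
times = [0.2 * i for i in range(20)]
obs = [forward(true_theta, t) + random.gauss(0.0, sigma) for t in times]

def log_post(theta):
    # Gaussian likelihood with a flat prior on theta > 0.
    if theta <= 0.0:
        return -math.inf
    return -sum((forward(theta, t) - o) ** 2 for t, o in zip(times, obs)) / (2 * sigma ** 2)

# Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
theta, lp = 1.0, log_post(1.0)
samples = []
for i in range(20000):
    prop = theta + random.gauss(0.0, 0.1)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop       # accept the move
    if i >= 5000:                       # discard burn-in
        samples.append(theta)
posterior_mean = sum(samples) / len(samples)
```

The adaptive variant (AMCMC) additionally tunes the proposal covariance from the accumulating chain, which is what resolves the parameter correlations noted in the abstract.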

  11. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    PubMed

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force-plate-embedded treadmill system. The full-GRF model, whose foot-ground reaction elements were dynamically adjusted according to the vertical displacement and anterior-posterior speed between the foot and ground, was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model, with shear reaction elements dynamically adjustable according to the input vertical force, was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure (COP). The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0 m/s), with 4.2, 1.3, and 5.7% BW for the anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing the COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate the shear forces of the GRF and increase the accuracy of joint kinetics estimation.
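A generic distance- and velocity-dependent contact element of the kind described can be sketched as follows (a Hunt-Crossley-style viscoelastic form with assumed constants, not the authors' fitted foot-ground model):

```python
def contact_force(penetration, penetration_rate, k=2.0e5, c=1.5, n=1.5):
    """Generic Hunt-Crossley-style viscoelastic ground-contact element.

    A nonlinear spring whose damping term scales with penetration depth,
    clipped so the ground can only push, never pull. All constants are
    illustrative assumptions, not fitted foot-ground parameters.
    """
    if penetration <= 0.0:
        return 0.0  # foot is above the ground: no contact force
    force = k * penetration ** n * (1.0 + c * penetration_rate)
    return max(force, 0.0)

# 1 cm of static penetration produces k * 0.01**1.5 = 200 N of support.
fz = contact_force(0.01, 0.0)
```

Summing such elements over contact points on the foot, with parameters adjusted by displacement and speed as in the abstract, yields the vertical GRF from kinematics alone.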

  12. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model's mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated by automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical-twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information in the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  13. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

Most previous land-surface model calibration studies have defined global ranges for their parameters when searching for optimal parameter sets. Little work has been conducted to study the impact of realistic versus global ranges, or of model complexity, on calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted, and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes at most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of the simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distributions of parameters, marginal posterior probability density functions, and estimates of the uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has the more significant impact on parameter and uncertainty estimation.
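The annealing step inside BSI can be illustrated with a plain simulated-annealing sketch on a toy misfit surface (generic algorithm only; the objective, proposal width, and cooling schedule are assumptions, not the CHASM/BSI configuration, which uses the "very fast" schedule):

```python
import math
import random

random.seed(7)

def objective(x):
    # Toy 1-D "misfit" with a local minimum near x = 1.13 and the global
    # minimum near x = -1.30 (stands in for a flux-error surface).
    return x ** 4 - 3.0 * x ** 2 + x

def simulated_annealing(f, x0, t0=2.0, cooling=0.999, steps=20000):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability,
        # which shrinks as the temperature t cools.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Start in the basin of the *local* minimum; annealing should escape it.
best_x, best_f = simulated_annealing(objective, x0=1.2)
```

In BSI the accepted samples are additionally reweighted by importance sampling to approximate posterior densities rather than just locate the optimum.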

  14. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  15. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  16. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262

  17. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  19. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    NASA Astrophysics Data System (ADS)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate application, watershed models need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models, such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation times, etc. In this work, we have formulated a new parameter sampling strategy, Sampling for Uniformity (SU), for parameter screening, based on the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to the other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitative nature of the EE parameter screening approach, reinforcing the point that EE should be used only to reduce the resource burden required by FAST/Sobol' analyses, not to replace them.
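The elementary-effects screening referred to above can be sketched as follows (simple random one-at-a-time sampling, not the OT/MOT/SU strategies compared in the abstract; the test function is an illustrative stand-in for a watershed model run):

```python
import random

random.seed(42)

def model(x):
    # Toy response: x[0] linear and strong, x[1] nonlinear, x[2] inert.
    return 4.0 * x[0] + 2.0 * x[1] ** 2

def elementary_effects(f, dim, r=50, delta=0.25):
    # Morris (1991) screening: for r random base points, perturb one
    # parameter at a time by delta and record the finite-difference effect.
    effects = [[] for _ in range(dim)]
    for _ in range(r):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        fx = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            effects[i].append((f(xp) - fx) / delta)
    stats = []
    for es in effects:
        mu_star = sum(abs(e) for e in es) / len(es)      # overall importance
        mu = sum(es) / len(es)
        sigma = (sum((e - mu) ** 2 for e in es) / (len(es) - 1)) ** 0.5
        stats.append((mu_star, sigma))                   # sigma flags nonlinearity
    return stats

stats = elementary_effects(model, 3)
```

A large mu* marks an influential parameter, and a large sigma marks nonlinear or interacting behavior; parameters with both near zero can be fixed before running FAST/Sobol'.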

  20. Parameter Transferability Across Spatial and Temporal Resolutions in Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, R.; Torfs, P. J.; Zappa, M.; Mizukami, N.; Clark, M. P.; Uijlenhoet, R.

    2015-12-01

Improvements in computational power and data availability have provided new opportunities for hydrological modeling. The increased complexity of hydrological models, however, also leads to time-consuming optimization procedures, and observations are still required to calibrate the model. Both to decrease the calculation time of the optimization and to be able to apply models in poorly gauged basins, many studies have focused on the transferability of parameters. We adopted a probabilistic approach to systematically investigate parameter transferability across both temporal and spatial resolution. A Variable Infiltration Capacity model for the Thur basin (1703 km2, Switzerland) was set up and run at four different spatial resolutions (1x1 km, 5x5 km, 10x10 km, lumped) and three different temporal resolutions (hourly, daily, monthly). Three objective functions were used to evaluate the model: Kling-Gupta Efficiency (KGE(Q)), Nash-Sutcliffe Efficiency (NSE(Q)), and NSE(logQ). We used a Hierarchical Latin Hypercube Sample (Vorechovsky, 2014) to efficiently sample the most sensitive parameters. The model was run 3150 times and the best 1% of the runs was selected as behavioral. The overlap in selected behavioral sets for different spatial and temporal resolutions was used as an indicator of parameter transferability. There was a large overlap in the selected sets for the different spatial resolutions, implying that parameters were to a large extent transferable across spatial resolutions. The temporal resolution, however, had a larger impact on the parameters; it significantly affected the parameter distributions for at least four out of seven parameters. The parameter values for the monthly time step were found to be substantially different from those for the daily and hourly time steps. This suggests that output from models calibrated on a monthly time step cannot be interpreted or analysed on an hourly or daily time step. It was also shown that the selected objective
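A plain Latin hypercube sampler conveys the stratified-sampling idea behind the parameter sets evaluated above (a simple stand-in for the Hierarchical Latin Hypercube Sample cited in the abstract):

```python
import random

random.seed(0)

def latin_hypercube(n, dim):
    # Plain LHS on the unit hypercube: each parameter's [0, 1) range is cut
    # into n equal strata, each stratum is sampled exactly once, and the
    # columns are shuffled independently so the strata pair up at random.
    columns = []
    for _ in range(dim):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n)]

# 10 sample points covering 3 normalized parameters.
points = latin_hypercube(10, 3)
```

Each marginal is guaranteed to cover its whole range evenly, which is why far fewer runs (3150 here) suffice than with purely random sampling.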

  1. Resolution of a Rank-Deficient Adjustment Model Via an Isomorphic Geometrical Setup with Tensor Structure.

    DTIC Science & Technology

    1987-03-01

Final report, AFGL-TR-87-0102. [The scanned abstract is largely illegible; recoverable fragments concern the transformation of multiple integrals and the associated metric tensors of the isomorphic geometrical setup with tensor structure.]

  2. Improved input parameters for diffusion models of skin absorption.

    PubMed

    Hansen, Steffi; Lehr, Claus-Michael; Schaefer, Ulrich F

    2013-02-01

Using a diffusion model to predict skin absorption requires accurate estimates of input parameters for model geometry, affinity, and transport characteristics. This review summarizes methods for obtaining input parameters for diffusion models of skin absorption, focusing on partition and diffusion coefficients. These include experimental methods, extrapolation approaches, and correlations that relate partition and diffusion coefficients to tabulated physico-chemical solute properties. Exhaustive databases of lipid-water and corneocyte protein-water partition coefficients are presented and analyzed to provide improved approximations for estimating these coefficients. The most commonly used estimates of lipid and corneocyte diffusion coefficients are also reviewed. To improve modeling of skin absorption, future diffusion models should include the vertical stratum corneum heterogeneity, slow equilibration processes, absorption from complex non-aqueous formulations, and an improved representation of dermal absorption processes. This will require input parameters for which no suitable estimates are yet available.
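For a single homogeneous membrane, the two input quantities the review focuses on combine via the textbook relation k_p = K * D / h (a deliberate simplification; the review argues that realistic models must treat the stratum corneum as heterogeneous):

```python
def permeability_coefficient(K, D, h):
    # Steady-state permeability of one homogeneous membrane layer:
    #   k_p = K * D / h
    # K: partition coefficient (membrane/vehicle, dimensionless)
    # D: diffusion coefficient in the membrane (cm^2/s)
    # h: diffusion path length (cm)
    return K * D / h

# Illustrative values only, not recommendations from the review:
# K = 10 (lipid-water), D = 1e-9 cm^2/s, h = 1e-3 cm.
kp = permeability_coefficient(10.0, 1e-9, 1e-3)   # -> 1e-5 cm/s
```

The steady-state flux through the layer is then J = k_p * deltaC for a concentration difference deltaC across it.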

  3. A six-parameter Iwan model and its application

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming

    2016-02-01

Iwan models are practical tools for describing the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out. The effects of different numbers of Jenkins elements on the simulation are discussed. The results indicate that the six-parameter Iwan model can accurately reproduce the experimental behavior of joints.
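The discretization into Jenkins elements can be sketched as follows (a generic parallel-Iwan construction with assumed slider strengths and stiffness, not Li and Hao's six-parameter distribution): each element is a spring of stiffness k in series with a Coulomb slider, and the sum over elements produces the backbone and hysteresis behavior.

```python
class IwanArray:
    """Parallel array of Jenkins elements (spring + Coulomb slider in series)."""

    def __init__(self, strengths, k):
        self.f = list(strengths)                  # slider strengths
        self.k = k                                # element spring stiffness
        self.slider = [0.0] * len(strengths)      # slider (plastic) displacements

    def force(self, x):
        # Advance every element to displacement x and sum the spring forces.
        total = 0.0
        for i, fy in enumerate(self.f):
            e = x - self.slider[i]                # elastic stretch of element i
            if self.k * abs(e) > fy:              # slider slips: force capped at fy
                e = fy / self.k * (1.0 if e > 0 else -1.0)
                self.slider[i] = x - e
            total += self.k * e
        return total

joint = IwanArray(strengths=[0.2, 0.4, 0.6, 0.8, 1.0], k=10.0)
loading = [joint.force(x / 100) for x in range(0, 101)]       # backbone, 0 -> 1
unloading = [joint.force(x / 100) for x in range(100, -1, -1)]  # hysteretic return
```

Weak sliders slip early and strong ones late, so the summed force traces a softening backbone, and the residual slider displacements on unloading generate the hysteresis loop whose area is the energy dissipated per cycle.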

  4. Behaviour of the cosmological model with variable deceleration parameter

    NASA Astrophysics Data System (ADS)

    Tiwari, R. K.; Beesham, A.; Shukla, B. K.

    2016-12-01

We consider the Bianchi type-VI0 massive string universe with decaying cosmological constant Λ. To solve Einstein's field equations, we assume that the shear scalar is proportional to the expansion scalar and that the deceleration parameter q is a linear function of the Hubble parameter H, i.e., q = α + βH, which yields the scale factor a = e^{(1/β)√(2βt + k₁)}. The model expands exponentially with cosmic time t. The value of the cosmological constant Λ is small and positive. We also discuss the physical parameters as well as the jerk parameter j, which predicts that the universe in this model originates as in the ΛCDM model.
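As a consistency check (an illustrative derivation, not quoted from the paper), the stated scale factor follows from the standard definitions H = ȧ/a and q = -aä/ȧ² for the case α = -1 of the ansatz:

```latex
q \equiv -\frac{a\ddot{a}}{\dot{a}^{2}} = -1 - \frac{\dot{H}}{H^{2}},
\qquad
q = -1 + \beta H
\;\Rightarrow\;
\dot{H} = -\beta H^{3}.

% Integrating once gives H^{-2} = 2\beta t + k_{1}, hence
H(t) = \left(2\beta t + k_{1}\right)^{-1/2},
\qquad
\ln a = \int H\,dt = \frac{1}{\beta}\sqrt{2\beta t + k_{1}},
\qquad
a(t) = e^{\frac{1}{\beta}\sqrt{2\beta t + k_{1}}}.
```

The late-time behavior H → 0 with a still growing is what produces the quoted exponential-type expansion.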

  5. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study

    PubMed Central

    Girling, Alan J; Hofer, Timothy P; Wu, Jianhua; Chilton, Peter J; Nicholl, Jonathan P; Mohammed, Mohammed A; Lilford, Richard J

    2012-01-01

    Risk-adjustment schemes are used to monitor hospital performance, on the assumption that excess mortality not explained by case mix is largely attributable to suboptimal care. We have developed a model to estimate the proportion of the variation in standardised mortality ratios (SMRs) that can be accounted for by variation in preventable mortality. The model was populated with values from the literature to estimate a predictive value of the SMR in this context—specifically the proportion of those hospitals with SMRs among the highest 2.5% that fall among the worst 2.5% for preventable mortality. The extent to which SMRs reflect preventable mortality rates is highly sensitive to the proportion of deaths that are preventable. If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable. The model offers a ‘reality check’ for case mix adjustment schemes designed to isolate the preventable component of any outcome rate. PMID:23069860

  6. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K_1 correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, modeling correlations among VLBI observations via the model of Treuhaft and Lanyi improves modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  7. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    USGS Publications Warehouse

    Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity (NPP) from the Moderate-Resolution Imaging Spectroradiometer (MODIS) and C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and in soils were used to calibrate the model. To investigate how many adjustable parameters the available observations could resolve, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of NPP values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable

  8. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    PubMed

    Clark, Samuel J; Houle, Brian

    2014-01-01

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.

  9. Parameters of cosmological models and recent astronomical observations

    SciTech Connect

    Sharov, G.S.; Vorontsova, E.G. E-mail: elenavor@inbox.ru

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H_0 = 70.262 ± 0.319 km s^{-1} Mpc^{-1}, Ω_m = 0.276^{+0.009}_{-0.008}, Ω_Λ = 0.769 ± 0.029, Ω_k = -0.045 ± 0.032. The GCG model under the restriction α ≥ 0 is reduced to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z ≥ 2.3.

  10. Parameters of cosmological models and recent astronomical observations

    NASA Astrophysics Data System (ADS)

    Sharov, G. S.; Vorontsova, E. G.

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H_0 = 70.262 ± 0.319 km s^{-1} Mpc^{-1}, Ω_m = 0.276^{+0.009}_{-0.008}, Ω_Λ = 0.769 ± 0.029, Ω_k = -0.045 ± 0.032. The GCG model under the restriction α ≥ 0 is reduced to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z ≥ 2.3.
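
As a sketch of how such parameter limits are obtained, the following assumes the standard Friedmann relation H(z) = H_0 √(Ω_m(1+z)³ + Ω_k(1+z)² + Ω_Λ) and fits synthetic H(z) points by a coarse chi-square grid search; the data points and error bars are invented for illustration, not the 34 points used in the paper:

```python
import numpy as np

def hubble(z, H0, Om, OL):
    """ΛCDM expansion rate; Ok = 1 - Om - OL allows spatial curvature."""
    Ok = 1.0 - Om - OL
    return H0 * np.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + OL)

# Synthetic H(z) data generated at the paper's best-fit values (illustrative only)
z = np.linspace(0.1, 2.3, 12)
H_obs = hubble(z, 70.262, 0.276, 0.769)
sigma = np.full_like(z, 3.0)  # assumed measurement errors in km/s/Mpc

# Coarse grid search for the chi-square minimum over (Om, OL) with H0 held fixed
grid = np.linspace(0.1, 0.9, 81)
chi2 = np.array([[np.sum(((hubble(z, 70.262, om, ol) - H_obs) / sigma) ** 2)
                  for ol in grid] for om in grid])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: Om={grid[i]:.3f}, OL={grid[j]:.3f}")  # near the generating values
```

A real analysis would also profile over H_0, add supernova and BAO likelihoods, and read 1σ intervals from the Δχ² = 1 contour rather than the grid minimum alone.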

  11. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  12. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  13. Parameter uncertainty analysis of a biokinetic model of caesium

    SciTech Connect

    Li, W. B.; Klein, W.; Blanchardon, Eric; Puncher, M; Leggett, Richard Wayne; Oeh, U.; Breustedt, B.; Nosske, Dietmar; Lopez, M.

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.
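
The uncertainty factor defined in the abstract (the square root of the ratio of the 97.5th to the 2.5th percentile) is straightforward to compute from Monte Carlo output; a minimal sketch, with an invented lognormal spread standing in for the propagated model predictions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo predictions of whole-body retention at one time point,
# obtained by propagating parameter uncertainties (illustrative values only)
retention = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=10_000)

# Uncertainty factor as defined in the abstract:
# square root of the ratio of the 97.5th to the 2.5th percentile
p2, p97 = np.percentile(retention, [2.5, 97.5])
uf = np.sqrt(p97 / p2)
print(f"UF = {uf:.2f}")
```

For a lognormal spread with log-scale σ the UF is exp(1.96σ), so σ = 0.3 gives a UF near 1.8, comparable to the early-time values quoted in the abstract.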

  14. Parameter uncertainty analysis of a biokinetic model of caesium.

    PubMed

    Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.

  15. Parameter uncertainty analysis of a biokinetic model of caesium

    DOE PAGES

    Li, W. B.; Klein, W.; Blanchardon, Eric; ...

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of ratio between 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier time after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs were increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, muscle and blood and the rate of transfer from kidneys to urinary bladder content are most influential to the blood clearance and to the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content impact mostly. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention in the later time, say, after Day 500 will be explored in a successive work in the framework of EURADOS.

  16. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  17. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title, dated August 1, 1986, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  18. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application.
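
The errors-in-variables problem that motivates GTLS can be illustrated with its identity-covariance special case, total least squares via an SVD; the local linear model, noise levels, and data below are invented for illustration (the paper's GTLS additionally weights by a noise covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy local linear model y = 2*x1 - 1*x2, with noise on inputs AND output:
# the errors-in-variables setting in which ordinary least squares is biased
n = 500
X_true = rng.normal(size=(n, 2))
y_true = X_true @ np.array([2.0, -1.0])
X = X_true + rng.normal(scale=0.1, size=X_true.shape)
y = y_true + rng.normal(scale=0.1, size=n)

# Total least squares via SVD of the augmented matrix [X | y]: the parameter
# vector comes from the right singular vector of the smallest singular value
Z = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]
theta = -v[:-1] / v[-1]
print(theta)  # ≈ [2, -1]
```

With equal, independent noise on every column this estimator is consistent, which is the property the GTLS machinery preserves when the noise covariance differs across channels.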

  19. Hubble Expansion Parameter in a New Model of Dark Energy

    NASA Astrophysics Data System (ADS)

    Saadat, Hassan

    2012-01-01

    In this study, we consider a new model of dark energy based on a Taylor expansion of its density and calculate the Hubble expansion parameter for various parameterizations of the equation of state. This model is useful for probing a possible evolution of the dark energy component in comparison with current observational data.

  20. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric…

  1. Multiple Model Parameter Adaptive Control for In-Flight Simulation.

    DTIC Science & Technology

    1988-03-01

    dynamics of an aircraft. The plant is controllable by a proportional-plus-integral (PI) control law. This section describes two methods of calculating...adaptive model-following PI control law [20-24]. The control law bases its control gains upon the parameters of a linear difference equation model which

  2. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…

  3. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data, meaning that the variation of the uncertain parameters will decrease and the probability of the observed data will improve as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
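
The Monte Carlo propagation step described above can be sketched as follows; the closed-form surrogate stands in for an actual Delft3D run, and the input distributions are assumptions for illustration, not those of the Duck94 study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy surrogate for a nearshore model: wave height at the shore as a function
# of offshore height, depth, and a friction factor (hypothetical formula
# standing in for a full hydrodynamic model run)
def surrogate(H_off, depth, cf):
    return H_off * np.exp(-cf) * np.tanh(depth / 5.0)

# Sample uncertain inputs from assumed prior distributions and propagate
n = 5000
H_off = rng.normal(1.5, 0.2, n)    # offshore wave height [m]
depth = rng.normal(8.0, 0.5, n)    # water depth [m]
cf = rng.uniform(0.01, 0.05, n)    # friction factor

H_shore = surrogate(H_off, depth, cf)
print(f"mean={H_shore.mean():.2f} m, 95% interval="
      f"[{np.percentile(H_shore, 2.5):.2f}, {np.percentile(H_shore, 97.5):.2f}] m")
```

In the Bayesian variant, the same samples would be reweighted by the likelihood of the Duck94 observations, narrowing the input distributions before the final propagation.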

  4. Is it getting hot in here? Adjustment of hydraulic parameters in six boreal and temperate tree species after 5 years of warming.

    PubMed

    McCulloh, Katherine A; Petitmermet, Joshua; Stefanski, Artur; Rice, Karen E; Rich, Roy L; Montgomery, Rebecca A; Reich, Peter B

    2016-12-01

    Global temperatures (T) are rising, and for many plant species, their physiological response to this change has not been well characterized. In particular, how hydraulic parameters may change has only been examined experimentally for a few species. To address this, we measured characteristics of the hydraulic architecture of six species growing in ambient T and ambient +3.4 °C T plots in two experimentally warmed forest sites in Minnesota. These sites are at the temperate-boreal ecotone, and we measured three species from each forest type. We hypothesized that relative to boreal species, temperate species near their northern range border would increase xylem conduit diameters when grown under elevated T. We also predicted a continuum of responses among wood types, with conduit diameter increases correlating with increases in the complexity of wood structure. Finally, we predicted that increases in conduit diameter and specific hydraulic conductivity would positively affect photosynthetic rates and growth. Our results generally supported our hypotheses, and conduit diameter increased under elevated T across all species, although this pattern was driven predominantly by three species. Two of these species were temperate angiosperms, but one was a boreal conifer, contrary to predictions. We observed positive relationships between the change in specific hydraulic conductivity and both photosynthetic rate (P = 0.080) and growth (P = 0.012). Our results indicate that species differ in their ability to adjust hydraulically to increases in T. Specifically, species with more complex xylem anatomy, particularly those individuals growing near the cooler edge of their range, appeared to be better able to increase conduit diameters and specific hydraulic conductivity, which permitted increases in photosynthesis and growth. Our data support results that indicate individual's ability to physiologically adjust is related to their location within their species range, and

  5. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico-mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multivariant physical and mathematical models of a control system is proposed, along with its application to the adjustment of automatic control systems (ACS) at production facilities, using a coal processing plant as an example.

  6. Bayesian methods for characterizing unknown parameters of material models

    DOE PAGES

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.

  7. Bayesian methods for characterizing unknown parameters of material models

    SciTech Connect

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
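
The framework's core step, viewing the unknown parameter as random and updating a prior with measurements of an observable, can be sketched on a one-dimensional grid; the observable-parameter mapping and noise level here are invented stand-ins for the paper's transport model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Candidate values for one unknown material parameter, with a flat prior
theta = np.linspace(0.1, 5.0, 500)
prior = np.full(theta.size, 1.0 / theta.size)

# Synthetic measurements of the observable; true parameter value 2.0,
# with the simplistic assumption "observable = parameter + Gaussian noise"
data = 2.0 + rng.normal(0.0, 0.3, size=10)

# Gaussian log-likelihood summed over measurements, evaluated on the grid
loglik = -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0) / 0.3 ** 2

# Posterior by Bayes' rule (normalized on the grid)
posterior = prior * np.exp(loglik - loglik.max())
posterior /= posterior.sum()
post_mean = (theta * posterior).sum()
print(f"posterior mean ≈ {post_mean:.2f}")
```

In the paper's setting the forward map is a stochastic transport equation, so each likelihood evaluation requires model runs (Monte Carlo or SROM) rather than this closed-form expression.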

  8. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  9. Microscopic calculation of interacting boson model parameters by potential-energy surface mapping

    SciTech Connect

    Bentley, I.; Frauendorf, S.

    2011-06-15

    A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.

  10. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper, transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
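
The bootstrap step described above can be sketched generically: resample the observations with replacement, refit the model on each resample, and collect the refitted values into an empirical parameter distribution. The function and toy half-angle data below are hypothetical illustrations, not the authors' pipeline.

```python
import numpy as np

def bootstrap_parameters(data, fit_func, n_boot=1000, rng=None):
    """Resample the data with replacement and refit each time,
    returning an empirical distribution of the fitted parameter."""
    rng = np.random.default_rng(rng)
    n = len(data)
    estimates = []
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]  # resample with replacement
        estimates.append(fit_func(sample))
    return np.array(estimates)

# Toy "fit": estimate a mean cone half-angle (degrees) from noisy measurements.
data = np.array([42.0, 45.5, 39.8, 44.1, 41.2, 43.7])
dist = bootstrap_parameters(data, np.mean, n_boot=2000, rng=0)
```

The spread of `dist` (e.g. its percentiles) then quantifies the parameter uncertainty that feeds the ensemble heliospheric predictions.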

  11. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
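
As background for the abstract above, the single diode equation is implicit in the current I and must be solved numerically at each voltage. The sketch below (not the author's code; the cell parameter values are purely illustrative) solves it with a bracketed root finder, exploiting the fact that the residual is strictly decreasing in I.

```python
import numpy as np
from scipy.optimize import brentq

def single_diode_current(v, i_ph, i_0, r_s, r_sh, n, v_t=0.025852):
    """Solve the implicit single diode equation
    I = I_ph - I_0*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh
    for the current I at terminal voltage V (single cell, thermal voltage V_t)."""
    def residual(i):
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * v_t)) - 1)
                - (v + i * r_s) / r_sh - i)
    # residual is strictly decreasing in i, so one bracketed root suffices
    return brentq(residual, -(i_ph + 1.0), i_ph + 1.0)

# Illustrative cell parameters (not from any datasheet):
cell = dict(i_ph=5.0, i_0=1e-9, r_s=0.01, r_sh=100.0, n=1.3)
i_sc = single_diode_current(0.0, **cell)  # short-circuit current, close to i_ph
```

Fitting the model then amounts to choosing (I_ph, I_0, R_s, R_sh, n) so that curves like this match the measured I-V points.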

  12. The parameter landscape of a mammalian circadian clock model

    NASA Astrophysics Data System (ADS)

    Jolley, Craig; Ueda, Hiroki

    2013-03-01

    In mammals, an intricate system of feedback loops enables autonomous, robust oscillations synchronized with the daily light/dark cycle. Based on recent experimental evidence, we have developed a simplified dynamical model and parameterized it by compiling experimental data on the amplitude, phase, and average baseline of clock gene oscillations. Rather than identifying a single ``optimal'' parameter set, we used Monte Carlo sampling to explore the fitting landscape. The resulting ensemble of model parameter sets is highly anisotropic, with very large variances along some (non-trivial) linear combinations of parameters and very small variances along others. This suggests that our model exhibits ``sloppy'' features that have previously been identified in various multi-parameter fitting problems. We will discuss the implications of this model fitting behavior for the reliability of both individual parameter estimates and systems-level predictions of oscillator characteristics, as well as the impact of experimental constraints. The results of this study are likely to be important both for improved understanding of the mammalian circadian oscillator and as a test case for more general questions about the features of systems biology models.
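
The "sloppy" anisotropy described above can be illustrated with a synthetic ensemble: eigen-decomposing the covariance of accepted parameter sets reveals stiff directions (tiny variance) and sloppy ones (huge variance) along non-trivial linear combinations of parameters. The numbers below are synthetic, not from the circadian model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble of 4-parameter fits: independent axes with wildly
# different spreads, rotated so stiff/sloppy directions mix all parameters.
scales = np.diag([3.0, 1.0, 0.1, 0.003])
rotation, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random rotation
ensemble = rng.normal(size=(5000, 4)) @ scales @ rotation

# Eigenvalues of the ensemble covariance span many orders of magnitude,
# the hallmark of a "sloppy" fitting landscape.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(ensemble.T)))[::-1]
anisotropy = eigvals[0] / eigvals[-1]
```

Individual parameter error bars can be huge along sloppy directions while systems-level predictions, which depend mainly on the stiff combinations, remain well constrained.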

  13. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment; the runoff simulation results also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
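
A perturbation analysis of this kind is usually expressed as a dimensionless relative sensitivity index S = (dY/Y)/(dX/X), computed by nudging one parameter up and down while holding the others fixed. The helper and toy "runoff" model below are hypothetical, not the AnnAGNPS equations.

```python
def relative_sensitivity(model, params, name, delta=0.1):
    """Perturb one parameter by +/-delta (fractional) and return the
    dimensionless central-difference sensitivity S = (dY/Y) / (dX/X)."""
    base = model(params)
    up = dict(params); up[name] = params[name] * (1 + delta)
    down = dict(params); down[name] = params[name] * (1 - delta)
    return (model(up) - model(down)) / base / (2 * delta)

# Hypothetical toy model: runoff scales with CN squared, weakly with slope LS.
def runoff(p):
    return p["CN"] ** 2 * (1 + 0.05 * p["LS"])

p0 = {"CN": 75.0, "LS": 1.2}
s_cn = relative_sensitivity(runoff, p0, "CN")  # recovers the power-law exponent, ~2
s_ls = relative_sensitivity(runoff, p0, "LS")  # much smaller
```

Ranking parameters by |S| reproduces the kind of ordering reported in the abstract (e.g. CN dominating the runoff response).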

  14. Evaluation of Personnel Parameters in Software Cost Estimating Models

    DTIC Science & Technology

    2007-11-02

    ACAP, 1.42; all other parameters would be set to the nominal value of one. The effort multiplier will be a fixed value if the model uses linear...data. The calculated multiplier values were the... Table 8. COSTAR Trials For Multiplier Calculation Run ACAP PCAP PCON APEX PLEX LTEX Effort...impact. Table 9. COCOMO II Personnel Parameters Effort Multipliers Driver Lowest Nominal Highest Analyst Capability (ACAP) 1.42 1.00 0.71

  15. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electromagnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electromotive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  16. Parameter identifiability and estimation of HIV/AIDS dynamic models.

    PubMed

    Wu, Hulin; Zhu, Haihong; Miao, Hongyu; Perelson, Alan S

    2008-04-01

    We use a technique from engineering (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005) to investigate the algebraic identifiability of a popular three-dimensional HIV/AIDS dynamic model containing six unknown parameters. We find that not all six parameters in the model can be identified if only the viral load is measured; instead, only four parameters and the product of two parameters (N and lambda) are identifiable. We introduce the concepts of an identification function and an identification equation and propose the multiple time point (MTP) method to form the identification function, which is an alternative to the previously developed higher-order derivative (HOD) method (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005). We show that the newly proposed MTP method has advantages over the HOD method in practical implementation. We also discuss the effect of the initial values of state variables on the identifiability of unknown parameters. We conclude that the initial values of output (observable) variables are part of the data that can be used to estimate the unknown parameters, and that the identifiability of the unknown parameters is not affected by these initial values even if they are measured with error; such noisy initial values only increase the estimation error of the unknown parameters. However, having the initial values of the latent (unobservable) state variables exactly known may help to identify more parameters. In order to validate the identifiability results, simulation studies are performed to estimate the unknown parameters and initial values from simulated noisy data.
We also apply the proposed methods to a clinical data set.

  17. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is choosing the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesized sounds to be unrealistic. An original approach is proposed that estimates the parameters of a piano model, from measurements of the string vibration, by minimizing a perceptual criterion. The minimization process used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in the case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process was run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  18. Control of the SCOLE configuration using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-06-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria remain to be studied.

  19. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    PubMed

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced when lower-order effects are corrected for, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait.
We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions.

  20. QCD-inspired determination of NJL model parameters

    NASA Astrophysics Data System (ADS)

    Springer, Paul; Braun, Jens; Rechenberger, Stefan; Rennecke, Fabian

    2017-03-01

    The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.

  1. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew Robert

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained under the assumption that some experimental data are available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. In addition to the nominal approach, an alternative distribution can be used, together with corrections that expand the scope of this method. This method is one of very few that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.

  2. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
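
The iterative nonlinear least squares fit can be sketched with a much-simplified return-waveform model: an error-function leading edge (amplitude, track point, rms roughness) on top of a noise baseline, with the skewness and attitude terms of the full model omitted. This is a hedged illustration, not the report's FORTRAN procedure.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def waveform(t, amp, t0, sigma, base):
    """Simplified ocean-return model: error-function leading edge plus a
    noise floor (a sketch of a Brown-type mean return waveform)."""
    return base + 0.5 * amp * (1 + erf((t - t0) / (np.sqrt(2) * sigma)))

# Synthetic "averaged gate samples" with known parameters plus noise.
t = np.linspace(-20.0, 20.0, 128)
true_params = (1.0, 2.0, 3.0, 0.05)  # amplitude, track point, roughness, baseline
rng = np.random.default_rng(0)
data = waveform(t, *true_params) + rng.normal(0.0, 0.01, t.size)

# Iterative nonlinear least squares recovers the waveform parameters.
fit = least_squares(lambda p: waveform(t, *p) - data, x0=[0.8, 0.0, 1.0, 0.0])
amp, t0, sigma, base = fit.x
```

The recovered track point `t0` is the quantity that ultimately feeds the altitude estimate, and `sigma` maps to ocean surface rms roughness.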

  3. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    SciTech Connect

    Young, L; Yang, F

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation and a 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were most significant for field sizes of 3.6 cm or less, where the initial gamma factors progressively decreased with field size; for a 1.6 cm field size, the gamma passing rate increased from 56.1% to 98.8%. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small field VMAT treatment plans for fixed jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.

  4. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  5. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
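
Of the sampling schemes listed, Latin Hypercube Sampling is simple enough to sketch from scratch: each parameter range is split into n equal-probability strata, and every stratum is sampled exactly once per dimension. This is a generic illustration, not SPOT's own implementation.

```python
import numpy as np

def latin_hypercube(n, bounds, rng=None):
    """Draw n samples over the given [(lo, hi), ...] parameter bounds so that
    every one of the n equal-width strata is hit exactly once per dimension."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # one independent permutation of the strata per dimension, plus in-stratum jitter
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.random((n, d))) / n          # uniform samples in [0, 1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

samples = latin_hypercube(100, [(0.0, 1.0), (-5.0, 5.0)], rng=42)
```

Compared to plain Monte Carlo, the stratification guarantees even marginal coverage of each parameter with far fewer model runs.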

  6. Dynamic Factor Analysis Models With Time-Varying Parameters.

    PubMed

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-04-11

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model with vector autoregressive relations and time-varying cross-regression parameters at the factor level. Using techniques drawn from the state-space literature, the model was fitted to a set of daily affect data (over 71 days) from 10 participants who had been diagnosed with Parkinson's disease. Our empirical results lend partial support and some potential refinement to the Dynamic Model of Activation with regard to how the time dependencies between positive and negative affects change over time. A simulation study is conducted to examine the performance of the proposed techniques when (a) changes in the time-varying parameters are represented using the true model of change, (b) supposedly time-invariant parameters are represented as time-varying, and

  7. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of such failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters for the models of generating units, including the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for a model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  8. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. 
As expected, predictive accuracy of all MAPs for
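
The single-factor procedure (regression of local observations against the regional prediction P) can be sketched as an ordinary least-squares calibration. The storm loads below are hypothetical, used only to show the mechanics.

```python
import numpy as np

def map_single_factor(regional_pred, local_obs):
    """Sketch of a MAP-1F-P-style adjustment: regress local observations on the
    regional model prediction P, returning a function that adjusts new P values."""
    X = np.column_stack([np.ones_like(regional_pred), regional_pred])
    coef, *_ = np.linalg.lstsq(X, local_obs, rcond=None)  # [intercept, slope]
    return lambda p: coef[0] + coef[1] * p

# Hypothetical storm loads: the regional model is biased high by ~25% plus an offset.
P = np.array([10.0, 14.0, 18.0, 25.0, 30.0, 40.0])
obs = 0.8 * P + 1.0
adjust = map_single_factor(P, obs)
```

Once calibrated on the local monitoring data, `adjust` corrects the regional prediction at unmonitored sites in the same area.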

  9. [Parameter uncertainty analysis for urban rainfall runoff modelling].

    PubMed

    Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei

    2012-07-01

    An urban watershed in Xiamen was selected for parameter uncertainty analysis of urban stormwater runoff modeling, in terms of identification and sensitivity analysis, based on the storm water management model (SWMM) using Monte-Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters with larger K-S values in the hydrological and hydraulic module, and the rank of K-S values in this module is Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the parameters of the exponential washoff model (Coefficient and Exponent) and the Max. Buildup parameter of the saturation buildup model in three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
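
The RSA step — splitting Monte Carlo runs into behavioral and non-behavioral sets and ranking parameters by the Kolmogorov-Smirnov distance between the two parameter samples — can be sketched on a toy model. The model, parameter names and thresholds here are illustrative, not SWMM.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

# Hypothetical toy setup: the simulation "error" depends strongly on CN
# (best fits near CN = 80) and not at all on the second parameter.
cn = rng.uniform(60.0, 95.0, 2000)
n_perv = rng.uniform(0.05, 0.5, 2000)
error = np.abs(cn - 80.0) + rng.normal(0.0, 2.0, 2000)

behavioral = error < np.quantile(error, 0.2)  # best 20% of runs

# RSA: K-S distance between behavioral and non-behavioral parameter samples;
# a larger statistic D marks a more identifiable (sensitive) parameter.
d_cn = ks_2samp(cn[behavioral], cn[~behavioral]).statistic
d_np = ks_2samp(n_perv[behavioral], n_perv[~behavioral]).statistic
```

Sorting the parameters by their K-S statistics yields exactly the kind of sensitivity ranking (Dstore-Imperv > CN > ...) reported in the abstract.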

  10. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  11. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.

  12. Race and Gender Influences on Adjustment in Early Adolescence: Investigation of an Integrative Model.

    ERIC Educational Resources Information Center

    DuBois, David L.; Burk-Braxton, Carol; Swenson, Lance P.; Tevendale, Heather D.; Hardesty, Jennifer L.

    2002-01-01

    Investigated the influence of racial and gender discrimination and difficulties on adolescent adjustment. Found that discrimination and hassles contribute to a general stress context which in turn influences emotional and behavioral problems in adjustment, while racial and gender identity positively affect self-esteem and thus adjustment. Revealed…

  13. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
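
    The core of the Simulated Annealing procedure is a Metropolis acceptance rule under a decreasing temperature. The ETAS likelihood itself is involved, so the hedged sketch below minimizes a toy negative log-likelihood (a single exponential rate parameter) purely to make the annealing loop concrete; the cooling schedule, step size and all numeric values are illustrative choices, not the paper's settings.

```python
import math, random

def neg_log_lik(rate, data):
    # Negative log-likelihood of an exponential(rate) sample; stands in for
    # the (much more complex) ETAS likelihood.
    if rate <= 0:
        return float("inf")
    return -sum(math.log(rate) - rate * x for x in data)

def anneal(data, x0=1.0, t0=1.0, cooling=0.995, steps=4000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, neg_log_lik(x0, data), t0
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.1)            # random neighbour proposal
        fc = neg_log_lik(cand, data)
        # Accept improvements always; accept uphill moves with probability
        # exp(-(fc - fx) / t): the Metropolis criterion.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        t *= cooling                               # geometric cooling schedule
    return best

rng = random.Random(42)
data = [rng.expovariate(2.0) for _ in range(500)]  # true rate = 2.0
est = anneal(data)
print(est)  # should sit close to the analytic MLE, len(data)/sum(data)
```

    Because uphill moves are accepted at high temperature, the chain can escape local optima early and settles into hill-climbing as the temperature drops; the same loop applies unchanged to a multi-parameter likelihood with a vector-valued neighbour proposal.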

  14. Estimation of the parameters of ETAS models by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  15. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036

  16. Social Support and Psychological Adjustment Among Latinas With Arthritis: A Test of a Theoretical Model

    PubMed Central

    Abraído-Lanza, Ana F.

    2013-01-01

    Background Among people coping with chronic illness, tangible social support sometimes has unintended negative consequences on the recipient’s psychological health. Identity processes may help explain these effects. Individuals derive self-worth and a sense of competence by enacting social roles that are central to the self-concept. Purpose This study tested a model drawing from some of these theoretical propositions. The central hypothesis was that tangible support in fulfilling a highly valued role undermines self-esteem and a sense of self-efficacy, which, in turn, affect psychological adjustment. Methods Structured interviews were conducted with 98 Latina women with arthritis who rated the homemaker identity as being of central importance to the self-concept. Results A path analysis indicated that, contrary to predictions, tangible housework support was related to less psychological distress. Emotional support predicted greater psychological well-being. These relationships were not mediated by self-esteem or self-efficacy. Qualitative data revealed that half of the sample expressed either ambivalent or negative feelings about receiving housework support. Conclusions Results may reflect social and cultural norms concerning the types of support that are helpful and appropriate from specific support providers. Future research should consider the cultural meaning and normative context of the support transaction. This study contributes to scarce literatures on the mechanisms that mediate the relationship between social support and adjustment, as well as illness and psychosocial adaptation among Latina women with chronic illness. PMID:15184092

  17. Climate change decision-making: Model & parameter uncertainties explored

    SciTech Connect

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcomes of policy initiatives, and helps set priorities for research so that the outcome ambiguities faced by decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representations of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the choice of model structure rather than by parameter uncertainties.

  18. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas, and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial-coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly through flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF), set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the southern Appalachian region. We use hydrologic simulations over different basins in the region to evaluate the propagation of the bias correction into flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of the CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. 
This report is concerned primarily with the

  20. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  1. Generating Effective Models and Parameters for RNA Genetic Circuits.

    PubMed

    Hu, Chelsea Y; Varner, Jeffrey D; Lucks, Julius B

    2015-08-21

    RNA genetic circuitry is emerging as a powerful tool to control gene expression. However, little work has been done to create a theoretical foundation for RNA circuit design. A prerequisite to this is a quantitative modeling framework that accurately describes the dynamics of RNA circuits. In this work, we develop an ordinary differential equation model of transcriptional RNA genetic circuitry, using an RNA cascade as a test case. We show that parameter sensitivity analysis can be used to design a set of four simple experiments that can be performed in parallel using rapid cell-free transcription-translation (TX-TL) reactions to determine the 13 parameters of the model. The resulting model accurately recapitulates the dynamic behavior of the cascade, and can be easily extended to predict the function of new cascade variants that utilize new elements with limited additional characterization experiments. Interestingly, we show that inconsistencies between model predictions and experiments led to the model-guided discovery of a previously unknown maturation step required for RNA regulator function. We also determine circuit parameters in two different batches of TX-TL, and show that batch-to-batch variation can be attributed to differences in parameters that are directly related to the concentrations of core gene expression machinery. We anticipate the RNA circuit models developed here will inform the creation of computer aided genetic circuit design tools that can incorporate the growing number of RNA regulators, and that the parametrization method will find use in determining functional parameters of a broad array of natural and synthetic regulatory systems.
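
    As a minimal illustration of the kind of ODE model involved, the sketch below integrates a three-species transcriptional repression cascade (R1 represses R2, which represses R3) with forward Euler. The rate constants, Hill coefficient and degradation rate are invented for illustration; they are not the 13 fitted parameters from the paper.

```python
def cascade_deriv(state, k=(5.0, 5.0, 5.0), d=0.1, K=10.0):
    # d[Ri]/dt for a repression cascade: constitutive synthesis of R1,
    # Hill-repressed synthesis of R2 and R3, first-order degradation of all.
    r1, r2, r3 = state
    k1, k2, k3 = k
    repress = lambda r: 1.0 / (1.0 + (r / K) ** 2)   # Hill repression, n = 2
    return (k1 - d * r1,
            k2 * repress(r1) - d * r2,
            k3 * repress(r2) - d * r3)

def integrate(deriv, state, dt=0.01, t_end=100.0):
    # Forward-Euler integration; adequate for this non-stiff toy system.
    t = 0.0
    while t < t_end:
        state = tuple(s + dt * ds for s, ds in zip(state, deriv(state)))
        t += dt
    return state

r1, r2, r3 = integrate(cascade_deriv, (0.0, 0.0, 0.0))
print(r1, r2, r3)
```

    At steady state the cascade logic is visible: R1 settles at k1/d (high), so R2 stays repressed (low), which in turn leaves R3 derepressed (high).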

  2. Important observations and parameters for a salt water intrusion model

    USGS Publications Warehouse

    Shoemaker, W.B.

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  3. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
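
    A fixed-effects flavour of the Gompertz fit can be sketched by linearisation: writing W(t) = Wm * exp(-exp(-b * (t - t0))) and treating the asymptotic weight Wm as known, log(log(Wm / W)) is linear in t, so b and t0 follow from an ordinary least-squares line. The parameter values and noise level below are invented; the paper's mixed model additionally gives each bird random deviations in these parameters, a layer this sketch omits.

```python
import math, random

def gompertz(t, wm, b, t0):
    return wm * math.exp(-math.exp(-b * (t - t0)))

# Simulated body-weight data (grams) for one bird; parameters are illustrative.
wm_true, b_true, t0_true = 4000.0, 0.04, 40.0
rng = random.Random(7)
ts = list(range(5, 60, 5))                     # days of age
ws = [gompertz(t, wm_true, b_true, t0_true) * (1 + rng.gauss(0, 0.01))
      for t in ts]

# Double-log transform, then ordinary least squares for slope and intercept:
#   log(log(Wm / W)) = -b * t + b * t0
ys = [math.log(math.log(wm_true / w)) for w in ws]
n = len(ts)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
        sum((t - tbar) ** 2 for t in ts)
b_hat = -slope
t0_hat = ybar / b_hat + tbar                   # recover t0 from the intercept
print(b_hat, t0_hat)
```

    With the multiplicative noise kept small, the recovered growth rate and inflection age land close to the simulated truth; with real data, Wm would itself be estimated, which is where nonlinear (and mixed) estimation becomes necessary.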

  4. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetically active radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available for our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the evolution of the plant stock. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential of machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model

  5. A data-driven model of present-day glacial isostatic adjustment in North America

    NASA Astrophysics Data System (ADS)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. 
Along parts of the North American coastline, improved predictions of the long-term (kyr

  6. A Hamiltonian Model of Generator With AVR and PSS Parameters*

    NASA Astrophysics Data System (ADS)

    Qian, Jing.; Zeng, Yun.; Zhang, Lixiang.; Xu, Tianmao.

    Taking the typical thyristor excitation system, including the automatic voltage regulator (AVR) and the power system stabilizer (PSS), as an example, the supply rates of the AVR and PSS branches are selected as the energy function of the controller and added to the Hamiltonian function of the generator to compose the total energy function. By a proper transformation, the standard form of the Hamiltonian model of the generator including AVR and PSS is derived. The structure matrix and damping matrix of the model include the feature parameters of the AVR and PSS, which provides a foundation for studying the interaction mechanism of parameters among the AVR, the PSS and the generator. Finally, the structural relationships and interactions of the system model are studied; the results show that the structural and damping characteristics reflected by the model are consistent with the practical system.

  7. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the d-th (d ≥ 2) time point ahead. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
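
    For concreteness, the CKLS short-rate dynamics dr = (α + βr)dt + σr^γ dW can be simulated with an Euler-Maruyama step. The sketch below keeps the four parameters fixed (the paper's point is precisely to let them vary stochastically from window to window, which is omitted here); all numeric values are illustrative.

```python
import math, random

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n_steps, seed=0):
    # Euler-Maruyama discretisation of dr = (alpha + beta*r)dt + sigma*r^gamma dW.
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))         # Brownian increment
        r += (alpha + beta * r) * dt + sigma * max(r, 0.0) ** gamma * dw
        r = max(r, 0.0)                            # crude non-negativity floor
        path.append(r)
    return path

# Mean-reverting case: the drift alpha + beta*r pulls r toward -alpha/beta = 5 %.
path = simulate_ckls(r0=0.10, alpha=0.01, beta=-0.2, sigma=0.05,
                     gamma=0.5, dt=1 / 252, n_steps=2520)
print(path[-1])
```

    With γ = 0.5 and β < 0 this reduces to a CIR-type square-root process; the CKLS family nests several classic short-rate models through the choice of γ.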

  8. The use of satellites in gravity field determination and model adjustment

    NASA Astrophysics Data System (ADS)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

    Methods to improve gravity field models of the Earth with available data from satellite observations are proposed and discussed. In principle, all of the satellite observation types mentioned provide information on satellite orbit perturbations and, through them, on the Earth's gravity field, since satellite orbits are affected most strongly by the Earth's gravity field. Two subjects are therefore addressed: representation forms of the Earth's gravity field and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.

  9. Assessing Parameter Identifiability in Phylogenetic Models Using Data Cloning

    PubMed Central

    Ponciano, José Miguel; Burleigh, J. Gordon; Braun, Edward L.; Taper, Mark L.

    2012-01-01

    The success of model-based methods in phylogenetics has motivated much research aimed at generating new, biologically informative models. This new computer-intensive approach to phylogenetics demands validation studies and sound measures of performance. To date there has been little practical guidance available as to when and why the parameters in a particular model can be identified reliably. Here, we illustrate how Data Cloning (DC), a recently developed methodology to compute the maximum likelihood estimates along with their asymptotic variance, can be used to diagnose structural parameter nonidentifiability (NI) and distinguish it from other parameter estimability problems, including when parameters are structurally identifiable, but are not estimable in a given data set (INE), and when parameters are identifiable, and estimable, but only weakly so (WE). The application of the DC theorem uses well-known and widely used Bayesian computational techniques. With the DC approach, practitioners can use Bayesian phylogenetics software to diagnose nonidentifiability. Theoreticians and practitioners alike now have a powerful, yet simple tool to detect nonidentifiability while investigating complex modeling scenarios, where getting closed-form expressions in a probabilistic study is complicated. Furthermore, here we also show how DC can be used as a tool to examine and eliminate the influence of the priors, in particular if the process of prior elicitation is not straightforward. Finally, when applied to phylogenetic inference, DC can be used to study at least two important statistical questions: assessing identifiability of discrete parameters, like the tree topology, and developing efficient sampling methods for computationally expensive posterior densities. PMID:22649181

  10. Constraint on Seesaw Model Parameters with Electroweak Vacuum Stability

    NASA Astrophysics Data System (ADS)

    Okane, H.; Morozumi, T.

    2017-03-01

    Within the standard model, the electroweak vacuum is metastable. We study how the heavy right-handed neutrinos of the seesaw model affect this stability through their loop contributions to the Higgs potential. Requiring that the lifetime of the electroweak vacuum be longer than the age of the Universe, we obtain constraints on parameters such as the right-handed neutrino masses and the strength of their Yukawa couplings.

  11. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    The overall remotely piloted drop model operation is discussed, including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.

  12. Parabolic problems with parameters arising in an evolution model for phytoremediation

    NASA Astrophysics Data System (ADS)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean up sites polluted by heavy metals such as chromium. One of the new innovative methods of eradicating metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  13. Modeling and simulation of HTS cables for scattering parameter analysis

    NASA Astrophysics Data System (ADS)

    Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June

    2016-11-01

    Most modeling and simulation of high temperature superconducting (HTS) cables is inadequate for high-frequency analysis, since the simulations focus on the fundamental frequency of the power grid, which does not reflect transient characteristics. However, high-frequency analysis is an essential process for studying HTS cable transients for protection and diagnosis of the cables. Thus, this paper proposes a new approach to modeling and simulation of HTS cables to derive the scattering parameters (S-parameters), an effective high-frequency analysis, for transient wave-propagation characteristics in the high-frequency range. A parameter-sweeping method is used to validate the simulation results against measured data from a network analyzer (NA). This paper also presents the effects of the cable-to-NA connector, in order to minimize the error between the simulated and measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be accurately derived over a wide frequency range. The results of the proposed modeling and simulation yield the characteristics of the HTS cables and will contribute to their analysis.

  14. Left-right-symmetric model parameters: Updated bounds

    SciTech Connect

    Polak, J.; Zralek, M.

    1992-11-01

    Using the available updated experimental data, including the latest results from the CERN e+e- collider LEP and improved parity-violation results, we find new constraints on the parameters of the left-right-symmetric model in the case of light right-handed neutrinos.

  15. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  16. Investigation of land use effects on Nash model parameters

    NASA Astrophysics Data System (ADS)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    Flood forecasting is of great importance in hydrologic planning, hydraulic structure design, water resources management and sustainable designs such as flood control and management. Nash's instantaneous unit hydrograph is frequently used for simulating hydrological response in natural watersheds. Urban hydrology is gaining more attention due to population increases and the associated escalation of construction. Rapid development of urban areas affects the hydrologic processes of watersheds by decreasing soil permeability, flood base flow and lag time, and increasing flood volume, peak runoff rates and flood frequency. In this study the influence of urbanization on the significant parameters of the Nash model has been investigated. These parameters were calculated using three popular methods (i.e. moment, root mean square error and random sampling data generation) in a small watershed consisting of one natural sub-watershed which drains into a residentially developed sub-watershed in the city of Sierra Vista, Arizona. The results indicated that for all three methods the lag time, which is the product of the Nash parameters "K" and "n", is greater in the natural sub-watershed than in the developed one. This logically implies more storage and/or attenuation in the natural sub-watershed. The median K and n parameters derived from the three methods using calibration events were tested via a set of verification events. The results indicated that all three methods have acceptable accuracy in hydrograph simulation. The CDF curves and histograms of the parameters clearly show the difference in Nash parameter values between the natural and developed sub-watersheds. Some specific upper and lower percentile values around the median of the generated parameters (i.e. 10, 20 and 30 %) were analyzed to further investigate the derived parameters. The model was sensitive to variations in the values of the uncertain K and n parameters. Changes in n are smaller than in K in both sub-watersheds indicating
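
    The Nash model referenced above is a gamma-shaped instantaneous unit hydrograph; a small sketch (parameter values hypothetical, not the Sierra Vista calibration) shows how the lag time arises as the product n*K:

```python
import math

# Sketch of Nash's instantaneous unit hydrograph (a gamma distribution):
# u(t) = (t/K)**(n-1) * exp(-t/K) / (K * gamma(n)),
# whose first moment (centroid lag) is n*K, the "lag time" that the
# abstract reports as larger for the natural sub-watershed.

def nash_iuh(t, n, K):
    return (t / K) ** (n - 1) * math.exp(-t / K) / (K * math.gamma(n))

def lag_time(n, K):
    return n * K  # product of the Nash parameters

# Hypothetical parameter sets for a natural vs. developed sub-watershed:
natural = dict(n=3.0, K=1.5)    # lag = 4.5 h
developed = dict(n=2.0, K=0.8)  # lag = 1.6 h
print(lag_time(**natural) > lag_time(**developed))  # True
```

    The unit hydrograph integrates to one by construction, so only the shape (n) and storage (K) parameters need calibration.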

  17. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity.

    PubMed

    King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M

    2015-11-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.

  18. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity

    PubMed Central

    King, Daniel W.; King, Lynda A.; Park, Crystal L.; Lee, Lewina O.; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L.; Kaloupek, Danny G.; Keane, Terence M.

    2015-01-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways. PMID:26693100

  19. Integrating microbial diversity in soil carbon dynamic models parameters

    NASA Astrophysics Data System (ADS)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamic models have been developed during the last century. These models are mainly deterministic compartment models with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods during the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamic models. Some interesting attempts can be found, dominated by the incorporation of several compartments of different groups of microbial biomass, in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity into a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land-use history. Then, the soils were incubated for 104 days and labelled and non-labelled CO2 fluxes were measured at ten

  20. Parameter Perturbations with the GFDL Model: Smoothness and Uncertainty

    NASA Astrophysics Data System (ADS)

    Zamboni, L.; Jacob, R. L.; Neelin, J.; Kotamarthi, V. R.; Held, I.; Zhao, M.; Williams, T. J.; McWilliams, J. C.; Moore, T. L.; Wilde, M.; Nangia, N.

    2013-12-01

    We found that smoothness characterizes the response of global precipitation to perturbations of 6 parameters related to cloud physics and circulation in 50-year AMIP simulations performed with the GFDL model at 1x1 degree resolution. Specifically, the AGCM depends quadratically on the parameters (Fig. 1a). Linearization of the derivative of a cost function (the globally averaged squared difference between model and observations; here illustrated for the entrainment rate) up to at least 2nd order around the standard case (eo=10) proves necessary for optimization purposes to correctly predict where the optimum value lies (Fig. 1b), and reflects the relevance of the nonlinearity of the response. The linearization also provides indications about desirable changes in the parameters' values for regional optimization, which may be locally different from those for the global average. Uncertainty of precipitation varies from -9 to 6% of the model's standard version and is highest for the ice fall speed in stratiform clouds and the entrainment in convective clouds, which are the parameters with the widest range of possible values (Fig. 2). The smooth behavior and the quantified measure of sensitivity we report here are the backbone for the design of computationally effective multi-parameter perturbations and model optimization, which ultimately improve the reliability of AGCM simulations. Smoothness and optimum parameter value for the entrainment rate: a) root mean squared error and fits based on values eo=[8,16] and extrapolated over eo=[4,6]; b) derivative of the cost function computed at different levels of precision in the linearization (blue, green and black lines) and numerically using 1) the quadratic fit in the expression of the cost function (red line) and 2) only AGCM output (pink line). Note that the linearization determines the correct value of the minimum without using any information about the model's output at that point: the quadratic fit is based on data
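
    The quadratic dependence described above implies that a handful of cost-function samples suffice to locate the optimum parameter value. A toy sketch (synthetic cost samples, not GFDL output):

```python
# If the cost function J(e) responds quadratically to a parameter e
# (here a hypothetical entrainment-rate-like parameter), three samples
# determine the fit J(e) = a*e**2 + b*e + c, with optimum e* = -b/(2*a).

def quadratic_fit(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    # Coefficients from the Lagrange interpolating polynomial.
    a = (y1 / ((x1 - x2) * (x1 - x3))
         + y2 / ((x2 - x1) * (x2 - x3))
         + y3 / ((x3 - x1) * (x3 - x2)))
    b = (-y1 * (x2 + x3) / ((x1 - x2) * (x1 - x3))
         - y2 * (x1 + x3) / ((x2 - x1) * (x2 - x3))
         - y3 * (x1 + x2) / ((x3 - x1) * (x3 - x2)))
    return a, b

# Synthetic cost samples drawn from J(e) = (e - 12)**2 + 5:
samples = [(8.0, 21.0), (10.0, 9.0), (16.0, 21.0)]
a, b = quadratic_fit(samples)
print(-b / (2 * a))  # optimum parameter value: 12.0
```

    In practice each sample is a full AGCM run, which is why exploiting the smoothness is what makes multi-parameter optimization computationally affordable.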

  1. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model

  2. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  3. Realistic uncertainties on Hapke model parameters from photometric measurement

    NASA Astrophysics Data System (ADS)

    Schmidt, Frédéric; Fernando, Jennifer

    2015-11-01

    The single particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey-stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop here an innovative Bayesian inversion method in order to study in detail the uncertainties of the retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey-stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (2012) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) measurements, which are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single-image/disk-resolved-image, EPF-like and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
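
    The coupling effect described above can be mimicked with a toy Metropolis sampler: when a forward model constrains mainly a combination of two parameters, their posterior samples fall along a ridge. All distributions below are hypothetical, not the actual Hapke model.

```python
import math
import random

# Toy Metropolis sketch: the "posterior" constrains mostly the sum of a
# b-like and a c-like parameter, so samples line up along a ridge and
# the retrieved uncertainties are strongly (negatively) correlated.

random.seed(0)

def log_post(b, c):
    # Hypothetical: tight constraint on b + c, loose prior on b alone.
    return (-((b + c - 0.8) ** 2) / (2 * 0.01 ** 2)
            - (b - 0.4) ** 2 / (2 * 0.3 ** 2))

b, c, samples = 0.4, 0.4, []
for i in range(20000):
    nb, nc = b + random.gauss(0, 0.05), c + random.gauss(0, 0.05)
    if math.log(random.random()) < log_post(nb, nc) - log_post(b, c):
        b, c = nb, nc
    if i > 5000:                       # discard burn-in
        samples.append((b, c))

mb = sum(s[0] for s in samples) / len(samples)
mc = sum(s[1] for s in samples) / len(samples)
cov = sum((x - mb) * (y - mc) for x, y in samples) / len(samples)
print(cov < 0.0)  # parameters anti-correlated along the constraint ridge
```

    Inspecting the joint sample cloud, rather than the two marginal error bars, is what reveals such a retrieval-induced relation between parameters.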

  4. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network are acknowledged to be a dynamic game process between them. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors; PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing models have limitations, e.g., the flow over-adjustment problem for the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends unilaterally on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium, the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached user equilibrium (UE) by detecting the stationary or non-stationary state of the link flow pattern. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
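
    The adjustment rule can be sketched on a toy two-route network (BPR-like costs and all constants hypothetical, not the paper's formulation): flow leaves the costlier route at a rate driven by the relative cost difference, until the route costs equalize at user equilibrium.

```python
# Toy day-to-day route adjustment in the spirit of a relative-proportion
# rule: the switching rate scales with (cost gap) / (cost of the route
# being left), not with the absolute cost gap.

def cost(flow, free_time, capacity):
    return free_time * (1.0 + (flow / capacity) ** 2)  # BPR-like link cost

def adjust(f1, demand=100.0, alpha=0.3, days=200):
    for _ in range(days):
        f2 = demand - f1
        c1, c2 = cost(f1, 10.0, 60.0), cost(f2, 15.0, 80.0)
        if c1 > c2:                      # travelers leave route 1
            f1 -= alpha * f1 * (c1 - c2) / c1
        else:                            # travelers leave route 2
            f1 += alpha * f2 * (c2 - c1) / c2
    return f1

f1 = adjust(f1=90.0)
f2 = 100.0 - f1
print(abs(cost(f1, 10.0, 60.0) - cost(f2, 15.0, 80.0)) < 1e-6)  # True
```

    At the fixed point the two route costs are equal, which is exactly the stationarity/UE equivalence the abstract establishes in general form.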

  5. Advances in Antarctic Mantle and Crustal Physics and Implications for Ice Sheet Models and Isostatic Adjustment Measurements

    NASA Astrophysics Data System (ADS)

    Ivins, Erik; Adhikari, Surendra; Seroussi, Helene; Larour, Eric; Wiens, Douglas; Scheinert, Mirko; Csatho, Beata; James, Thomas; Nyblade, Andrew

    2015-04-01

    The problem of improving both solid Earth structure models and assembling an appropriate tectonic framework for Antarctica is challenging for many reasons. The vast ice sheet cover is just one item in a long list of difficult observational challenges faced by solid Earth scientists. The ice sheet has a unique potential for causing relatively rapid global sea-level rise over the next couple of hundred years. This potential provides great impetus for employing extraordinary efforts to improve our knowledge of the thermo-mechanical properties and of the mass and energy transport systems operating in the underlying solid Earth. In this presentation we discuss the role of seismic mapping of the mantle and crust, heat flux inferences, models and measurements as they affect the state of the ice sheet and the predictions of present-day and future solid Earth glacial isostatic adjustment, global and regional sea-level variability. To illustrate the sensitivity to solid Earth parameters for deriving a model temperature at the base of the ice sheet, Tb, we have computed the differences between two models to produce maps of δTb, the differential temperature to the melting point at the base of the ice sheet using the ISSM 3-D Stokes flow model. A 'cold' case (with surface crustal heat flux qGHF = 40 mW/m^2) is compared to a 'hot' geothermal flux case (qGHF = 60 mW/m^2). Differences of δ Tb = 6 - 10 °C are predicted between the two heat flux assumptions, and these have associated differences in predicted ice velocities of a factor of 1.8-3.6. We also explore the hypothesis of a mantle plume, and its potential compatibility or incompatibility with basal ice sheet conditions in West Antarctica.

  6. Individualization of the parameters of the three-element Windkessel model using carotid pulse signal

    NASA Astrophysics Data System (ADS)

    Żyliński, Marek; Niewiadomski, Wiktor; Strasz, Anna; Gąsiorowska, Anna; Berka, Martin; Młyńczak, Marcel; Cybulski, Gerard

    2015-09-01

    The haemodynamics of the arterial system can be described by the three-element Windkessel model. As it is a lumped model, it does not account for pulse-wave propagation phenomena: pulse wave velocity, reflection, and pulse pressure profile changes during propagation. The Modelflow method uses this model to calculate stroke volume and total peripheral resistance (TPR) from the pulse pressure obtained from a finger; the reliability of this method has been questioned. The model parameters are aortic input impedance (Zo), TPR, and arterial compliance (Cw). They were obtained from studies of human aorta preparations. Individual adjustment is performed based on the subject's age and gender. As Cw is also affected by disease, this may lead to inaccuracies. Moreover, the Modelflow method transforms the pulse pressure recording from the finger (Finapres©) into a remarkably different pulse pressure in the aorta using a predetermined transfer function, which is another source of error. In the present study, we indicate a way to include in the Windkessel model information obtained by adding a carotid pulse recording to the finger pressure measurement. This information allows individualization of the values of Cw and Zo. It also seems reasonable to use the carotid pulse, which better reflects aortic pressure, to individualize the transfer function. Despite its simplicity, the Windkessel model describes essential phenomena in the arterial system remarkably well; therefore, it seems worthwhile to check whether individualization of its parameters would increase the reliability of results obtained with this model.
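
    A minimal numerical sketch of the three-element Windkessel (illustrative parameter values and inflow waveform, not the Modelflow implementation): the peripheral pressure obeys C dPc/dt = Q - Pc/R, and the total pressure is P = Zo*Q + Pc.

```python
import math

# Three-element Windkessel driven by a half-sine systolic inflow:
# Zo = aortic input impedance, R = peripheral resistance (TPR),
# C = arterial compliance. All values are illustrative.

def windkessel(Zo=0.05, R=1.0, C=1.5, period=0.8, systole=0.3,
               beats=20, dt=1e-4):
    pc = 80.0                        # peripheral pressure, mmHg
    t, t_end = 0.0, beats * period
    while t < t_end:
        phase = t % period
        q = 400.0 * math.sin(math.pi * phase / systole) if phase < systole else 0.0
        pc += dt * (q - pc / R) / C  # C * dPc/dt = Q - Pc/R
        t += dt
    return pc                        # end-diastolic pressure after 20 beats

print(round(windkessel(), 1))
```

    Individualizing Cw and Zo, as the abstract proposes, amounts to replacing the default constants above with subject-specific values inferred from the carotid pulse.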

  7. A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada

    NASA Astrophysics Data System (ADS)

    Simon, K. M.; James, T. S.; Dyke, A. S.

    2015-07-01

    A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 to 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.

  8. Investigation of RADTRAN Stop Model input parameters for truck stops

    SciTech Connect

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-03-01

    RADTRAN is a computer code for estimating the risks and consequences of the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as the Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input to the Stop Model are important. Therefore, an investigation of typical values of the RADTRAN stop parameters for truck stops was performed. The resulting data were analyzed to provide mean values, standard deviations, and histograms. The mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops.

  9. Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland

    NASA Astrophysics Data System (ADS)

    Steffen, R.; Lund, B.

    2015-12-01

    The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", start in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we will present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA induced earthquakes.

  10. Optimising muscle parameters in musculoskeletal models using Monte Carlo simulation.

    PubMed

    Reed, Erik B; Hanson, Andrea M; Cavanagh, Peter R

    2015-01-01

    The use of musculoskeletal simulation software has become a useful tool for modelling joint and muscle forces during human activity, including in reduced gravity, where direct experimentation is difficult. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler™ (San Clemente, CA, USA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces but predicted no activation in some large muscles, such as the rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. The rectus femoris was predicted to peak at 60.1% activation in the same test case, compared to 19.2% activation using default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
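
    The Monte Carlo idea can be sketched with a toy surrogate in place of the musculoskeletal model (the surrogate function and all numbers are hypothetical): sample parameter sets at random and keep the one whose predicted activation best matches a target.

```python
import random

# Monte Carlo search over hypothetical muscle parameters, keeping the
# set whose predicted activation is closest to a target value.

random.seed(42)

TARGET = 0.60  # e.g. desired peak activation for a large muscle

def predicted_activation(max_force, optimal_length):
    # Toy surrogate: activation needed to meet a fixed force demand.
    demand = 900.0
    return min(1.0, demand / (max_force * optimal_length))

best, best_err = None, float("inf")
for _ in range(5000):
    params = (random.uniform(500.0, 3000.0),   # max isometric force, N
              random.uniform(0.7, 1.3))        # normalized fiber length
    err = abs(predicted_activation(*params) - TARGET)
    if err < best_err:
        best, best_err = params, err

print(best_err < 0.01)  # a near-matching parameter set was found
```

    In the real study each sample is a full simulation run, so combinatorial reduction of the parameter space is what keeps the search tractable.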

  11. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
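
    A toy version of a stochastic-parameter model (all values illustrative, not the power-normal formulation): an autoregressive series whose coefficient drifts as a random walk, with a prediction interval obtained by simulating many futures.

```python
import random
import statistics

# AR(1)-like series with a time-varying (random-walk) coefficient;
# simulating many futures yields an empirical prediction interval.

random.seed(1)

def simulate_path(y0, phi0, horizon):
    y, phi = y0, phi0
    for _ in range(horizon):
        phi += random.gauss(0.0, 0.01)   # the parameter varies with time
        y = phi * y + random.gauss(0.0, 0.1)
    return y

finals = sorted(simulate_path(y0=1.0, phi0=0.9, horizon=10)
                for _ in range(2000))
lo, hi = finals[50], finals[-51]         # empirical ~95% interval
print(lo < statistics.mean(finals) < hi)  # True
```

    Because the coefficient itself is uncertain, the interval width reflects both observation noise and parameter drift, the feature the abstract credits for the improved coverage.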

  12. Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2016-08-01

    The aim of this study is to derive temperature, pressure and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of precipitable water vapor (PWV) at TUSAGA-Active stations in a test area spanning 38.0°-42.0° northern latitude and 28.0°-34.0° eastern longitude in Turkey. In conclusion, the meteorological parameters computed using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. Compared with other studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).

  13. Comparison of Cone Model Parameters for Halo Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon

    2013-11-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle γ between the sky plane and the cone axis. We compare these parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48 and those between γ-values range from -0.08 to 0.47, which is much smaller than expected. The reason may be the different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s-1, 169 km s-1, and 152 km s-1, respectively. We obtain correlation coefficients between the location from the models and the flare location of R > 0.45. Finally, we discuss the strengths and weaknesses of these models in terms of space-weather application.
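
    The RMS-error metric used in the comparison can be sketched with a crude cone-geometry approximation, v_proj ≈ v_radial * cos(γ); all speeds below are hypothetical, not the LASCO measurements.

```python
import math

# RMS error between "measured" sky-plane projection speeds and speeds
# reconstructed from cone-model radial speeds and axis angles (toy numbers).

def projected_speed(v_radial, gamma_deg):
    # Crude geometric approximation for a cone axis at angle gamma
    # from the sky plane.
    return v_radial * math.cos(math.radians(gamma_deg))

measured = [950.0, 1200.0, 780.0]          # km/s, hypothetical
modeled = [projected_speed(1100.0, 30.0),  # ~953 km/s
           projected_speed(1350.0, 25.0),  # ~1224 km/s
           projected_speed(900.0, 28.0)]   # ~795 km/s

rms = math.sqrt(sum((m, c) and (m - c) ** 2
                    for m, c in zip(measured, modeled)) / len(measured))
print(round(rms, 1))
```

    A lower RMS error, as reported for the asymmetric-cone model, means the model's geometry reproduces the observed projection speeds more faithfully.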

  14. Parameter identification for a suction-dependent plasticity model

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Schrefler, B. A.

    2001-03-01

    In this paper, the deterministic parameter identification procedure proposed in a companion paper is applied to suction-dependent elasto-plasticity problems. A mathematical model for this type of problem is first presented and then applied to parameter identification using laboratory data. In a second example, the identification procedure is applied to the exploitation of a gas reservoir. The effects of extracting underground fluids appear during, and long after, the extraction period and strongly condition the decision whether or not to exploit the natural resource. Identification procedures can be very useful tools for reliable long-term predictions.

  15. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is the ability to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from bidirectional canopy reflectance (CR) data obtained by a space-shuttle- or satellite-borne sensor. One approach to such estimation is inversion of CR models that relate these parameters to CR. The feasibility of this approach was shown previously. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, to develop the inversion technique further, and to delineate its strengths and limitations.

  16. Enhancing debris flow modeling parameters integrating Bayesian networks

    NASA Astrophysics Data System (ADS)

    Graf, C.; Stoffel, M.; Grêt-Regamey, A.

    2009-04-01

    Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running it. Usually the data base describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a given torrent is limited. Only a few places in the world offer valuable data sets describing the event history of debris-flow channels, with information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records combined with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that hazard-mitigation decisions be based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in modeling debris-flow frequencies and intensities, the possible run-out extent, and the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimates, we use flow-path and deposition-zone information from debris-flow events reconstructed by dendrogeomorphological analysis, covering more than 400 years, to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk assessment.

  17. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Marshall, J. A.

    1992-01-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit be computed at an unprecedented level of accuracy. To reach the pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm RMS error over a 10-day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm RMS. To satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as a combination of flat plates arranged in the shape of a box plus a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to obtain the total effect on the satellite center of mass. Selected parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived, and the ability to meet mission requirements with the 'box-wing' model is evaluated.
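
    The core of the 'box-wing' idea, summing independently computed per-plate accelerations, can be sketched as follows. This is a deliberately reduced, purely absorbing solar-radiation-pressure model with made-up areas and mass, not the mission's radiative/thermal model:

```python
# Minimal sketch of the 'box-wing' idea: sum per-plate solar
# radiation-pressure accelerations. Pure absorption only; real models add
# specular/diffuse reflection and thermal re-radiation. Numbers illustrative.
SOLAR_FLUX = 1361.0   # W/m^2 at 1 AU
C = 299792458.0       # speed of light, m/s

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def srp_acceleration(plates, sun_dir, mass_kg):
    """Total acceleration (m/s^2) from summing each illuminated plate.

    plates: list of (area_m2, unit_normal) tuples; sun_dir: unit vector
    from spacecraft toward the Sun; only plates with n.s > 0 are lit.
    """
    total = [0.0, 0.0, 0.0]
    for area, normal in plates:
        cos_theta = dot(normal, sun_dir)
        if cos_theta <= 0.0:          # plate faces away from the Sun
            continue
        mag = SOLAR_FLUX * area * cos_theta / (mass_kg * C)
        for k in range(3):            # absorbed light pushes anti-sunward
            total[k] -= mag * sun_dir[k]
    return total

# A box (six faces) plus one solar-array plate, illustrative geometry:
plates = [(3.0, (1, 0, 0)), (3.0, (-1, 0, 0)), (2.0, (0, 1, 0)),
          (2.0, (0, -1, 0)), (4.0, (0, 0, 1)), (4.0, (0, 0, -1)),
          (25.0, (1, 0, 0))]          # array normal held toward the Sun
a = srp_acceleration(plates, (1.0, 0.0, 0.0), 2400.0)
```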

  19. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  20. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  1. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  2. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
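
    The smear-then-centroid process the paper analyzes can be illustrated numerically. This is a toy simulation, not the paper's analytical line-segment-spread model: integrate a moving Gaussian spot over the exposure and take the intensity-weighted centroid, which lands at the mid-exposure position:

```python
import math

# Illustrative sketch: a star spot of Gaussian radius `sigma` moves at
# `v` pixels/s during exposure `T`; each pixel accumulates the time
# integral of the moving Gaussian (1-D for simplicity).
def smeared_pixels(x0, v, T, sigma, n_pixels, steps=200):
    pix = [0.0] * n_pixels
    dt = T / steps
    for i in range(steps):
        cx = x0 + v * (i + 0.5) * dt      # spot centre at this instant
        for p in range(n_pixels):
            pix[p] += math.exp(-((p - cx) ** 2) / (2 * sigma ** 2)) * dt
    return pix

def centroid(pix):
    """Intensity-weighted centroid of a pixel row."""
    total = sum(pix)
    return sum(p * val for p, val in enumerate(pix)) / total

pix = smeared_pixels(x0=10.0, v=8.0, T=1.0, sigma=1.5, n_pixels=40)
est = centroid(pix)   # mid-exposure position is 10 + 8 * 0.5 = 14 pixels
```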

  3. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches: the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from Hillsborough County, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationships, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest Rd(2) value as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to location, it is desirable to first use the geographical component to explore the explicitly spatial aspects of the crash data (the spatial heterogeneity, or spatially structured varying relationships), and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR proves more appropriate for regional crash modeling, as it outperforms the global models in capturing the spatial heterogeneity occurring in the relationships being modeled and, compared with the non-spatial models, is capable of accounting for the spatial correlation in crash data.

  4. Case-mix adjustment of the National CAHPS benchmarking data 1.0: a violation of model assumptions?

    PubMed Central

    Elliott, M N; Swartz, R; Adams, J; Spritzer, K L; Hays, R D

    2001-01-01

    OBJECTIVE: To compare models for the case-mix adjustment of consumer reports and ratings of health care. DATA SOURCES: The study used the Consumer Assessment of Health Plans (CAHPS) survey 1.0 National CAHPS Benchmarking Database data from 54 commercial and 31 Medicaid health plans from across the United States: 19,541 adults (age ≥ 18 years) in commercial plans and 8,813 adults in Medicaid plans responded regarding their own health care, and 9,871 Medicaid adults responded regarding the health care of their minor children. STUDY DESIGN: Four case-mix models (no adjustment; self-rated health and age; health, age, and education; and health, age, education, and plan interactions) were compared on 21 ratings and reports regarding health care for three populations (adults in commercial plans, adults in Medicaid plans, and children in Medicaid plans). The magnitude of case-mix adjustments, the effects of adjustments on plan rankings, and the homogeneity of these effects across plans were examined. DATA EXTRACTION: All ratings and reports were linearly transformed to a possible range of 0 to 100 for comparability. PRINCIPAL FINDINGS: Case-mix adjusters, especially self-rated health, have substantial effects, but these effects vary substantially from plan to plan, a violation of standard case-mix assumptions. CONCLUSION: Case-mix adjustment of CAHPS data needs to be re-examined, perhaps by using demographically stratified reporting or by developing better measures of response bias. PMID:11482589

  5. ESTIMATION OF EMISSION ADJUSTMENTS FROM THE APPLICATION OF FOUR-DIMENSIONAL DATA ASSIMILATION TO PHOTOCHEMICAL AIR QUALITY MODELING. (R826372)

    EPA Science Inventory

    Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analys...

  6. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for quantifying different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  7. Tradeoffs among watershed model calibration targets for parameter estimation

    NASA Astrophysics Data System (ADS)

    Price, Katie; Purucker, S. Thomas; Kraemer, Stephen R.; Babendreier, Justin E.

    2012-10-01

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation fit, while modified Nash-Sutcliffe efficiency (MNS) emphasizes lower flows, and the ratio of the simulated to observed standard deviations (RSD) prioritizes flow variability. We investigated tradeoffs of calibrating streamflow on three standard objective functions (NSE, MNS, and RSD), as well as a multiobjective function aggregating these three targets to simultaneously address a range of flow conditions, for calibration of the Soil and Water Assessment Tool (SWAT) daily streamflow simulations in two watersheds. A suite of objective functions was explored to select a minimally redundant set of metrics addressing a range of flow characteristics. After each pass of 2001 simulations, an iterative informal likelihood procedure was used to subset parameter ranges. The ranges from each best-fit simulation set were used for model validation. Values for optimized parameters vary among calibrations using different objective functions, which underscores the importance of linking modeling objectives to calibration target selection. The simulation set approach yielded validated models of similar quality as seen with a single best-fit parameter set, with the added benefit of uncertainty estimations. Our approach represents a novel compromise between equifinality-based approaches and Pareto optimization. Combining the simulation set approach with the multiobjective function was demonstrated to be a practicable and flexible approach for model calibration, which can be readily modified to suit modeling goals, and is not model or location specific.
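
    For readers who want to try the calibration targets, common textbook forms of the three objective functions fit in a few lines. The exponent-1 form of modified NSE is assumed here, and the flow values are illustrative, not from the study:

```python
import statistics

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - squared error over observed variance."""
    mo = statistics.fmean(obs)
    return 1 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                / sum((o - mo) ** 2 for o in obs))

def mns(obs, sim):
    """Modified NSE (absolute-error form), which de-emphasises flood peaks."""
    mo = statistics.fmean(obs)
    return 1 - (sum(abs(o - s) for o, s in zip(obs, sim))
                / sum(abs(o - mo) for o in obs))

def rsd(obs, sim):
    """Ratio of simulated to observed standard deviation (ideal value 1.0)."""
    return statistics.pstdev(sim) / statistics.pstdev(obs)

obs = [3.1, 5.4, 12.0, 48.0, 20.5, 9.2, 4.4]   # illustrative daily flows
sim = [2.8, 6.0, 10.5, 43.0, 23.0, 8.1, 5.0]
scores = (nse(obs, sim), mns(obs, sim), rsd(obs, sim))
```

    Note how the single peak flow dominates NSE but not MNS, which is exactly the tradeoff the study explores.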

  8. Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter

    NASA Astrophysics Data System (ADS)

    Ito, A.

    2014-12-01

    Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because the models are poorly constrained by observational data. Developing advanced methods of data-model fusion, or data assimilation, is therefore an important task for reducing the uncertainties and improving model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is computationally intensive but applicable to nonlinear systems, an advantage over techniques such as the ensemble Kalman filter and variational methods. At several sites, I used flux measurements of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this approach would also be effective for assimilating satellite observations. In the non-sequential case, the annual or long-term mean budget was adjusted to observations; this approach would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
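
    The weight-and-resample loop at the heart of a particle filter can be sketched with a deliberately trivial "model" (one parameter observed through Gaussian noise); a real application would replace the likelihood with a comparison of VISIT output against flux data:

```python
import math
import random

random.seed(42)

# Toy particle filter for parameter estimation: each particle is a candidate
# value of one parameter theta; the "model" is simply y = theta + noise.
TRUE_THETA, OBS_SD = 2.5, 0.4
observations = [TRUE_THETA + random.gauss(0, OBS_SD) for _ in range(200)]

particles = [random.uniform(0.0, 5.0) for _ in range(500)]
for y in observations:
    # importance weight: Gaussian likelihood of the observation per particle
    weights = [math.exp(-((y - th) ** 2) / (2 * OBS_SD ** 2))
               for th in particles]
    # multinomial resampling, then small jitter to avoid particle collapse
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [th + random.gauss(0, 0.01) for th in particles]

estimate = sum(particles) / len(particles)
```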

  9. Estimating Regression Parameters in an Extended Proportional Odds Model

    PubMed Central

    Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping

    2012-01-01

    The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583

  10. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
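
    The simulated-annealing component alone (without the sequential hypothesis testing and statistical model checking the paper couples to it) can be sketched on a toy one-parameter decay model:

```python
import math
import random

random.seed(1)

# Toy simulated annealing: recover a rate parameter k that minimises the
# squared error against "observed" decay data y = exp(-k_true * t).
K_TRUE = 0.7
data = [(t, math.exp(-K_TRUE * t)) for t in range(10)]

def cost(k):
    return sum((y - math.exp(-k * t)) ** 2 for t, y in data)

k, temp = 5.0, 1.0                 # initial guess and temperature
best_k, best_cost = k, cost(k)
while temp > 1e-4:
    cand = k + random.gauss(0, 0.5)          # random perturbation
    delta = cost(cand) - cost(k)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temp):
        k = cand
        if cost(k) < best_cost:
            best_k, best_cost = k, cost(k)
    temp *= 0.995                  # geometric cooling schedule
```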

  12. [A study of a coordinates transform iterative fitting method to extract bio-impedance model parameters].

    PubMed

    Zhou, Liming; Yang, Yuxing; Yuan, Shiying

    2006-02-01

    A new algorithm, a coordinates-transform iterative optimization method based on least-squares curve fitting, is presented for extracting bio-impedance model parameters. It converges faster and with higher precision than comparable methods, allowing the model parameters Ri, Re, Cm and alpha to be extracted rapidly and accurately. With the aims of lowering power consumption, decreasing cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with dual CPUs has been built. Preliminary results indicate that the intracellular resistance Ri increased markedly with increasing workload during sitting, reflecting ischemic change in the lower limbs.

  13. Neural mass model parameter identification for MEG/EEG

    NASA Astrophysics Data System (ADS)

    Kybic, Jan; Faugeras, Olivier; Clerc, Maureen; Papadopoulo, Théo

    2007-03-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However, their poor spatial resolution and small number of sensors do not permit reconstruction of a general spatial activation pattern, and the low signal-to-noise ratio (SNR) makes accurate reconstruction of a time course challenging as well. We therefore propose constrained reconstruction, modeling the relevant part of the brain with a neural mass model: a small number of zones are treated as entities, and neurons within a zone are assumed to activate simultaneously. The location and spatial extent of the zones, as well as the interzonal connection pattern, can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and other anatomical and brain-mapping observation techniques. The observation model is linear; its deterministic part is known from EEG/MEG forward modeling, and the statistics of the stochastic part can be estimated. The dynamics of the neural model are described by a moderate number of parameters that can be estimated from the recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have physiological meaning and their plausible ranges are known. Since the problem is highly nonlinear, a quasi-Newton optimization method with random sampling and automatic success evaluation is used. The actual connection topology can be identified from several possibilities. The method was tested on synthetic data as well as on real MEG somatosensory-evoked field (SEF) data.

  14. Complex Parameter Landscape for a Complex Neuron Model

    PubMed Central

    Achard, Pablo; De Schutter, Erik

    2006-01-01

    The electrical activity of a neuron is strongly dependent on the ionic channels present in its membrane. Modifying the maximal conductances from these channels can have a dramatic impact on neuron behavior. But the effect of such modifications can also be cancelled out by compensatory mechanisms among different channels. We used an evolution strategy with a fitness function based on phase-plane analysis to obtain 20 very different computational models of the cerebellar Purkinje cell. All these models produced very similar outputs to current injections, including tiny details of the complex firing pattern. These models were not completely isolated in the parameter space, but neither did they belong to a large continuum of good models that would exist if weak compensations between channels were sufficient. The parameter landscape of good models can best be described as a set of loosely connected hyperplanes. Our method is efficient in finding good models in this complex landscape. Unraveling the landscape is an important step towards the understanding of functional homeostasis of neurons. PMID:16848639
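
    A stripped-down (1+1) evolution strategy conveys the search idea, with a stand-in quadratic fitness in place of the paper's phase-plane-based fitness and a fixed mutation step (real implementations adapt the step size):

```python
import random

random.seed(7)

# Bare-bones (1+1) evolution strategy: mutate a parameter vector (think
# "maximal conductances"), keep the child only if it is at least as fit.
TARGET = [0.3, 1.2, 0.8]           # illustrative target output

def model_output(params):
    # stand-in for running a neuron model: a fixed linear mixing of params
    g1, g2 = params
    return [0.5 * g1, g1 + g2, 2 * g2]

def fitness(params):
    """Squared distance to the target output; lower is better."""
    out = model_output(params)
    return sum((o - t) ** 2 for o, t in zip(out, TARGET))

parent = [random.uniform(0, 2), random.uniform(0, 2)]
for _ in range(3000):
    child = [p + random.gauss(0, 0.3) for p in parent]   # mutation
    if fitness(child) <= fitness(parent):                # selection
        parent = child
best = fitness(parent)
```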

  15. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion-cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper, the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods proposed in this paper are applicable to other pattern-based geostatistical simulation methods.

  16. The definition of input parameters for modelling of energetic subsystems

    NASA Astrophysics Data System (ADS)

    Ptacek, M.

    2013-06-01

    This paper is a short review and basic description of mathematical models of renewable energy sources, which represent the individual subsystems of a system created in Matlab/Simulink. It covers the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks but could be promising in the near future; therefore, the paper presents a new dynamic model of a low-temperature fuel cell subsystem and defines its main input parameters. Finally, the main graphic results achieved for the suggested parameters and for all the individual subsystems mentioned above are shown.

  17. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces the CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
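For readers unfamiliar with ERGM sampling, the baseline that such work accelerates is a Metropolis sampler that toggles one dyad at a time and accepts with probability governed by the change statistics. The toy edge-plus-triangle model below is illustrative only and does not reproduce the paper's auxiliary parameter construction:

```python
import numpy as np

def ergm_sample(n, theta_edges, theta_tri, steps, seed=0):
    """Toy Metropolis sampler for an undirected edge+triangle ERGM."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for _ in range(steps):
        i, j = rng.integers(0, n, 2)
        if i == j:
            continue
        # Change statistics if edge (i, j) were turned on:
        common = int(np.dot(A[i], A[j]))    # triangles closed by (i, j)
        sign = -1 if A[i, j] else 1         # toggling off negates the change
        log_ratio = sign * (theta_edges * 1 + theta_tri * common)
        if np.log(rng.random()) < log_ratio:
            A[i, j] = A[j, i] = 1 - A[i, j]
    return A
```

Each sweep costs O(n) per proposal here; the mixing problems and per-step costs of exactly this kind of chain on large networks motivate the faster samplers the abstract describes.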

  18. Empirical flow parameters : a tool for hydraulic model validity

    USGS Publications Warehouse

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and empirical distributions of the various flow parameters, providing a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) To present ancillary values such as the Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
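Of the ancillary values listed, the Froude number is the quickest sanity check on a hydraulic model; a minimal sketch with hypothetical velocity and depth values:

```python
import math

def froude(v, d, g=9.81):
    """Froude number for mean section velocity v (m/s) and hydraulic depth d (m)."""
    return v / math.sqrt(g * d)

fr = froude(1.2, 0.9)  # Fr < 1 indicates subcritical flow
```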

  19. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of
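The "cost-function minimization" framing described above can be sketched with a hypothetical two-parameter decay model and synthetic noisy data (all names and values are invented for illustration; the paper's agent-based model and statistical model checking are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "experimental" data from a hypothetical decay model a*exp(-k*t)
t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(1)
data = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)

def cost(p):
    """Sum of squared distances between model output and data."""
    a, k = p
    return np.sum((a * np.exp(-k * t) - data) ** 2)

fit = minimize(cost, x0=[1.0, 1.0])   # local search from an initial guess
a_hat, k_hat = fit.x
```

In practice such local searches are combined with global strategies, or replaced by Bayesian inference when prior information on the parameters is available, exactly as the abstract notes.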

  20. Adjusting for mortality effects in chronic toxicity testing: Mixture model approach

    SciTech Connect

    Wang, S.C.D.; Smith, E.P.

    2000-01-01

    Chronic toxicity tests, such as the Ceriodaphnia dubia 7-d test, are typically analyzed using standard statistical methods such as analysis of variance or regression. Recent research has emphasized the use of Poisson regression or more generalized regression for the analysis of the fecundity data from these studies. A possible problem in using standard statistical techniques is that mortality may occur from toxicant effects as well as reduced fecundity. A mixture model that accounts for fecundity and mortality is proposed for the analysis of data arising from these studies. Inferences about key parameters in the model are discussed. A joint estimate of the inhibition concentration is proposed based on the model. Confidence interval estimation via the bootstrap method is discussed. An example is given for a study involving copper and mercury.
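A minimal version of such a mixture likelihood treats each animal as dying with probability p and otherwise producing Poisson-distributed offspring. The sketch below uses invented data and plain maximum likelihood, not the paper's full model or bootstrap intervals:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Hypothetical fecundity counts; np.nan marks animals that died
counts = np.array([20.0, 22.0, 18.0, np.nan, 25.0, np.nan, 19.0, 21.0])
dead = np.isnan(counts)
alive = counts[~dead].astype(int)

def neg_log_lik(params):
    p_mort, lam = params
    ll = dead.sum() * np.log(p_mort)            # mortality contribution
    ll += (~dead).sum() * np.log1p(-p_mort)     # survivors...
    ll += poisson.logpmf(alive, lam).sum()      # ...and their fecundity
    return -ll

fit = minimize(neg_log_lik, x0=[0.2, 15.0],
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
p_hat, lam_hat = fit.x   # MLEs: death fraction, mean fecundity of survivors
```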

  1. Numerical model for thermal parameters in optical materials

    NASA Astrophysics Data System (ADS)

    Sato, Yoichi; Taira, Takunori

    2016-04-01

    Thermal parameters of optical materials, such as thermal conductivity, thermal expansion, and the temperature coefficient of refractive index, play a decisive role in the thermal design of laser cavities. Accurate, temperature-dependent values of these parameters are therefore essential for developing high-intensity laser oscillators, in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have previously proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and these two quantities should be modeled independently to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity of each optical material. In this work we report thermal conductivities of various optical materials: Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics, and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO, and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of thermal expansion and of the temperature coefficient of refractive index for YAG, YVO, and GVO is also included for convenience. We believe our numerical model is useful not only for thermal analysis of laser cavities and optical waveguides but also for evaluating the physical properties of various transparent materials.
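The decomposition the model rests on is simply kappa = C_v * D, with C_v the specific heat per unit volume and D the thermal diffusivity. A one-line sketch with rough, YAG-like room-temperature figures (illustrative values only, not the paper's fitted parameters):

```python
def thermal_conductivity(c_vol, diffusivity):
    """kappa (W m^-1 K^-1) = volumetric specific heat (J m^-3 K^-1)
    times thermal diffusivity (m^2 s^-1)."""
    return c_vol * diffusivity

kappa = thermal_conductivity(2.6e6, 4.0e-6)  # ~10.4 W m^-1 K^-1, YAG-like
```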

  2. Incorporating Model Parameter Uncertainty into Prostate IMRT Treatment Planning

    DTIC Science & Technology

    2005-04-01

    Distribution Unlimited. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an ... Title: Incorporating Model Parameter Uncertainty into Prostate IMRT Treatment Planning (award DAMD17-03-1-0019). Author: David Y. Yang, Ph.D. Performing organization: Stanford University, Stanford, California 94305-5401. E-Mail: yong@reyes.stanford

  3. Modeling and Extraction of Parasitic Thermal Conductance and Intrinsic Model Parameters of Thermoelectric Modules

    NASA Astrophysics Data System (ADS)

    Sim, Minseob; Park, Hyunbin; Kim, Shiho

    2015-11-01

    We present a model and a method for extracting the parasitic thermal conductance as well as the intrinsic device parameters of a thermoelectric module, based on information readily available in vendor datasheets. An equivalent circuit model that is compatible with circuit simulators is derived, followed by a methodology for extracting both the intrinsic and the parasitic model parameters. For the first time, the effective thermal resistance of the ceramic and copper interconnect layers of the thermoelectric module is extracted using only parameters listed in vendor datasheets. Under experimental conditions, including varying electric current, the parameters extracted from the model accurately reproduce the performance of commercial thermoelectric modules.
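The equivalent-circuit relations behind such extractions are the standard steady-state thermoelectric device equations (Seebeck voltage, Peltier and Joule terms, conductive leak). A sketch with illustrative numbers, not the vendor-datasheet extraction procedure itself:

```python
def tem_output(seebeck, r_int, k_module, t_hot, t_cold, current):
    """Standard steady-state thermoelectric module model: load voltage and
    hot-side heat flow, given the module-level Seebeck coefficient (V/K),
    internal electrical resistance (ohm), and thermal conductance (W/K)."""
    dt = t_hot - t_cold
    v_load = seebeck * dt - current * r_int
    q_hot = (seebeck * t_hot * current       # Peltier heat pumping
             - 0.5 * current**2 * r_int      # half the Joule heat returns
             + k_module * dt)                # conductive leak through module
    return v_load, q_hot

v, q = tem_output(0.05, 2.0, 0.5, 320.0, 300.0, 0.3)
```

The parasitic ceramic/copper layers the paper extracts appear as extra thermal resistances in series with k_module.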

  4. Nonlocal order parameters for the 1D Hubbard model.

    PubMed

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-07

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.

  5. Bayesian Estimation in the One-Parameter Latent Trait Model.

    DTIC Science & Technology

    1980-03-01

    Massachusetts Univ Amherst, Lab of Psychometric and ... (F/G 12/1). Bayesian Estimation in the One-Parameter Latent Trait Model. Mar 80. Hariharan Swaminathan and Janice A. Gifford. Keywords: latent trait theory, Bayesian estimation. Abstract: When several ...

  6. Estimation of kinetic model parameters in fluorescence optical diffusion tomography.

    PubMed

    Milstein, Adam B; Webb, Kevin J; Bouman, Charles A

    2005-07-01

    We present a technique for reconstructing the spatially dependent dynamics of a fluorescent contrast agent in turbid media. The dynamic behavior is described by linear and nonlinear parameters of a compartmental model or some other model with a deterministic functional form. The method extends our previous work in fluorescence optical diffusion tomography by parametrically reconstructing the time-dependent fluorescent yield. The reconstruction uses a Bayesian framework and parametric iterative coordinate descent optimization, which is closely related to Gauss-Seidel methods. We demonstrate the method with a simulation study.

  7. Nonlocal Order Parameters for the 1D Hubbard Model

    NASA Astrophysics Data System (ADS)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-01

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point Uc=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at Uc. The behavior of the parity correlators is captured by an effective free spinless fermion model.

  8. Systematic parameter estimation for PEM fuel cell models

    NASA Astrophysics Data System (ADS)

    Carnes, Brian; Djilali, Ned

    The problem of parameter estimation is considered for the case of mathematical models for polymer electrolyte membrane fuel cells (PEMFCs). An algorithm for nonlinear least squares constrained by partial differential equations is defined and applied to estimate effective membrane conductivity, exchange current densities and oxygen diffusion coefficients in a one-dimensional PEMFC model for transport in the principal direction of current flow. Experimental polarization curves are fitted for conventional and low current density PEMFCs. Use of adaptive mesh refinement is demonstrated to increase the computational efficiency.
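A much-reduced version of this fitting problem, using an algebraic polarization curve (Tafel activation loss plus ohmic loss) instead of the paper's PDE-constrained model, can be sketched with nonlinear least squares; all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(i, e0, b, r):
    """Simplified cell voltage: open-circuit value minus Tafel and ohmic losses."""
    return e0 - b * np.log(i) - r * i

i = np.linspace(0.05, 1.2, 40)                 # current density, A/cm^2
rng = np.random.default_rng(0)
v = polarization(i, 1.0, 0.06, 0.25) + rng.normal(0.0, 0.002, i.size)

popt, _ = curve_fit(polarization, i, v, p0=[0.9, 0.05, 0.2])
e0_hat, b_hat, r_hat = popt
```

In the paper's PDE-constrained setting, evaluating the model requires solving the transport equations, which is what makes adaptive mesh refinement pay off.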

  9. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations and do not accurately represent the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  10. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
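One family of such informal-likelihood methods is approximate Bayesian computation (ABC). A rejection-sampling sketch on a toy exponential outbreak (the model, prior, and tolerance here are invented for illustration and unrelated to the authors' cholera models):

```python
import numpy as np

def abc_rejection(simulate, observed, prior_draw, n_draws, eps):
    """Keep parameter draws whose simulated series lie within an RMSE
    tolerance of the observations (an informal measure of goodness of fit)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if np.sqrt(np.mean((simulate(theta) - observed) ** 2)) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: recover the growth rate of an exponential outbreak
rng = np.random.default_rng(2)
t = np.arange(10)
obs = np.exp(0.3 * t)
post = abc_rejection(lambda r: np.exp(r * t), obs,
                     lambda: rng.uniform(0.0, 1.0), 2000, eps=2.0)
```

The spread of the accepted draws gives an (informal) picture of parameter uncertainty without ever writing down a formal likelihood.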

  11. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    SciTech Connect

    Konya, Andras

    2006-12-15

    The purpose of this study was to compare two similar foreign body retrieval devices, the Texan(TM) (TX) and the Texan LONGhorn(TM) (TX-LG), in a swine model. Both devices feature a <=30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly with both devices (mean +/- SD, 88 +/- 106 s for TX vs. 67 +/- 42 s for TX-LG), with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.

  12. Order-parameter model for unstable multilane traffic flow

    NASA Astrophysics Data System (ADS)

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the ``free flow <--> synchronized mode <--> jam'' phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the ``many-body'' effects in the car interaction in contrast to such variables as the mean car density and velocity being actually the zeroth and first moments of the ``one-particle'' distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of ``fast'' drivers and by taking into account the general properties of the driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifested itself in the above-mentioned phase transitions and gave rise to the hysteresis in both of them. Besides, the jam is characterized by the vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of the car cluster self-formation under the ``free flow <--> synchronized motion'' phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  13. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  14. Impact of GNSS Orbit Modeling on Reference Frame Parameters

    NASA Astrophysics Data System (ADS)

    Arnold, Daniel; Meindl, Michael; Lutz, Simon; Steigenberger, Peter; Beutler, Gerhard; Dach, Rolf; Schaer, Stefan; Prange, Lars; Sosnica, Krzysztof; Jäggi, Adrian

    2015-04-01

    The Center for Orbit Determination in Europe (CODE) contributes with a re-processing solution covering the years 1994 to 2013 (IGS repro2 effort) to the next ITRF release. The measurements to the GLONASS satellites are included since January 2002 in a rigorously combined solution. Around the year 2008 the network of combined GPS/GLONASS tracking stations became truly global. Since December 2011, 24 GLONASS satellites are active in their nominal positions. Since then the re-processing series shows - like the CODE operational solution - spurious signals in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates. These signals grew creepingly with the increasing influence of GLONASS. The problems could be attributed to deficiencies of the Empirical CODE Orbit Model (ECOM) for the GLONASS satellites. Based on the GPS-only, GLONASS-only, and combined GPS/GLONASS observations of recent years we study the impact of different orbit parameterizations on geodynamically relevant parameters, namely on ERPs, geocenter coordinates, and station coordinates. We also assess the quality of the GNSS orbits by measuring the orbit misclosures at the day boundaries and by validating the orbits using satellite laser ranging observations. We present an updated ECOM, which substantially reduces spurious signals in the estimated parameters in 1-day and in 3-day solutions.

  15. Accelerated Gravitational Wave Parameter Estimation with Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Canizares, Priscilla; Field, Scott E.; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ˜30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ˜70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  16. Effect of the Guessing Parameter on the Estimation of the Item Discrimination and Difficulty Parameters When Three-Parameter Logistic Model Is Assumed.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
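For reference, the three-parameter logistic item response function, with the usual 1.7 scaling constant that makes the logistic curve approximate the normal ogive the simulated data followed:

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3PL model: probability of a correct response for ability theta,
    discrimination a, difficulty b, and (pseudo-)guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))
```

At theta = b the probability is midway between c and 1, and as ability decreases the curve floors at the guessing parameter c, which is what makes jointly estimating a, b, and c difficult.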

  17. Modelling rock-avalanche induced impact waves: Sensitivity of the model chains to model parameters

    NASA Astrophysics Data System (ADS)

    Schaub, Yvonne; Huggel, Christian

    2014-05-01

    New lakes are forming in high-mountain areas all over the world due to glacier recession. Often they are located below steep, destabilized flanks and are therefore exposed to impacts from rock-/ice-avalanches. Several events are known worldwide in which an outburst flood was triggered by such an impact. In regions where valley bottoms are densely populated, such as the European Alps or the Cordillera Blanca in Peru, these far-travelling, high-magnitude events can result in major disasters. Natural hazards are usually assessed as single hazardous processes; for the above-mentioned reasons, however, methods for assessing and reproducing the hazardous process chain must be developed for the purpose of hazard map generation. A combination of physical process models has already been suggested and illustrated by means of a lake outburst in the Cordillera Blanca, Peru, where on April 11th, 2010, an ice-avalanche of approximately 300,000 m3 triggered an impact wave that overtopped the 22 m freeboard of the rock dam by 5 m and caused an outburst flood that travelled 23 km to the city of Carhuaz. We here present a study in which we assessed the sensitivity of the model chain, from ice-avalanche to impact wave, to single parameters, with rock-/ice-avalanche modeling by RAMMS and impact wave modeling by IBER. The parameters tested for their influence on the overtopping parameters that are crucial for outburst flood modeling were the assumed initial rock-/ice-avalanche volume, the calibration of the friction parameters in RAMMS, and the assumptions on erosion considered in RAMMS. The transformation of the RAMMS output (flow height and flow velocity along the shoreline of the lake) into an inflow hydrograph for IBER was also considered a possible source of uncertainty.
Overtopping time, volume, and wave height, as well as mean and maximum discharge, were considered decisive parameters for the outburst flood modeling and were therewith

  18. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
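Ranking a model hierarchy by Akaike index is mechanical once each model's maximized log-likelihood and parameter count are known; a sketch with invented numbers (not the paper's actual fits):

```python
def aic(log_lik, n_params):
    """Akaike information criterion: smaller is better."""
    return 2 * n_params - 2 * log_lik

# Hypothetical fits: (name, maximized log-likelihood, number of parameters)
fits = [("ODE model", -120.4, 4),
        ("DDE model", -115.1, 6),
        ("DDE, two delays", -114.8, 8)]
ranked = sorted(fits, key=lambda f: aic(f[1], f[2]))
best = ranked[0][0]  # extra parameters must buy enough likelihood to win
```

The point the abstract makes about numerical accuracy applies here directly: the log-likelihoods fed into the ranking are only as good as the ODE/DDE solver that produced them.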

  19. A modified inverse procedure for calibrating parameters in a land subsidence model and its field application in Shanghai, China

    NASA Astrophysics Data System (ADS)

    Luo, Yue; Ye, Shujun; Wu, Jichun; Wang, Hanmei; Jiao, Xun

    2016-05-01

    Land-subsidence prediction depends on an appropriate subsidence model and the calibration of its parameter values. A modified inverse procedure is developed and applied to calibrate five parameters in a compacting confined aquifer system using records of field data from vertical extensometers and corresponding hydrographs. The inverse procedure of COMPAC (InvCOMPAC) has been used in the past for calibrating vertical hydraulic conductivity of the aquitards, nonrecoverable and recoverable skeletal specific storages of the aquitards, skeletal specific storage of the aquifers, and initial preconsolidation stress within the aquitards. InvCOMPAC is modified to increase robustness in this study. There are two main differences in the modified InvCOMPAC model (MInvCOMPAC). One is that field data are smoothed before diagram analysis to reduce local oscillation of data and remove abnormal data points. A robust locally weighted regression method is applied to smooth the field data. The other difference is that the Newton-Raphson method, with a variable scale factor, is used to conduct the computer-based inverse adjustment procedure. MInvCOMPAC is then applied to calibrate parameters in a land subsidence model of Shanghai, China. Five parameters of aquifers and aquitards at 15 multiple-extensometer sites are calibrated. Vertical deformation of sedimentary layers can be predicted by the one-dimensional COMPAC model with these calibrated parameters at extensometer sites. These calibrated parameters could also serve as good initial values for parameters of three-dimensional regional land subsidence models of Shanghai.
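A "Newton-Raphson method with a variable scale factor" commonly means damping the Newton step until the residual actually decreases. A one-dimensional sketch of that idea (the paper's multi-parameter scheme may differ in detail):

```python
def newton_damped(f, df, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson with a variable scale factor: halve the step
    until it reduces the residual, then take it."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = fx / df(x)
        scale = 1.0
        while abs(f(x - scale * step)) >= abs(fx) and scale > 1e-8:
            scale *= 0.5                      # shrink the scale factor
        x -= scale * step
    return x

root = newton_damped(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
```

The damping is what buys robustness: a full Newton step taken from a poor starting guess can overshoot and diverge, which matters when the residuals come from noisy extensometer data.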

  20. Constraint of fault parameters inferred from nonplanar fault modeling

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Madariaga, Raul; Fukuyama, Eiichi

    2003-02-01

    We study the distribution of initial stress and frictional parameters for the 28 June 1992 Landers, California, earthquake through dynamic rupture simulation along a nonplanar fault system. We find that observational evidence of large slip distribution near the ground surface requires large nonzero cohesive forces in the depth-dependent friction law. This is the only way that stress can accumulate and be released at shallow depths. We then study the variation of frictional parameters along the strike of the fault. For this purpose we mapped into our segmented fault model the initial stress heterogeneity inverted by Peyrat et al. [2001] using a planar fault model. Simulations with this initial stress field improved the overall fit of the rupture process to that inferred from kinematic inversions, and also improved the fit to the ground motion observed in Southern California. In order to obtain this fit, we had to introduce additional variation of the frictional parameters along the fault. The most important is a weak Kickapoo fault and a strong Johnson Valley fault.

  1. [Temperature dependence of parameters of plant photosynthesis models: a review].

    PubMed

    Borjigidai, Almaz; Yu, Gui-Rui

    2013-12-01

    This paper reviews progress on temperature response models of plant photosynthesis. Mechanisms involved in changes in the photosynthesis-temperature curve are discussed in terms of four parameters: intercellular CO2 concentration, the activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (Vcmax), the activation energy of the rate of RuBP regeneration (Jmax), and the ratio of Jmax to Vcmax. All species increased the activation energy of Vcmax with increasing growth temperature, while the other parameters changed in ways that differed among species, suggesting that the activation energy of Vcmax may be the most important parameter for the temperature response of plant photosynthesis. In addition, research problems and prospects are discussed. It is necessary to combine photosynthesis models at the foliage and community levels, and to investigate the mechanisms of plant responses to global change in terms of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, the carbon balance of communities, and the primary productivity of ecosystems.
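The activation-energy parameters discussed here enter through the standard (non-peaked) Arrhenius temperature scaling of a photosynthetic rate from its 25 degC reference value; the parameter values below are merely illustrative:

```python
import numpy as np

R = 8.314            # universal gas constant, J mol^-1 K^-1
T_REF = 298.15       # 25 degC in kelvin

def arrhenius(v25, ea, t_leaf):
    """Scale a parameter (e.g. Vcmax) from its 25 degC value to leaf
    temperature t_leaf (K) using activation energy ea (J mol^-1)."""
    return v25 * np.exp(ea * (t_leaf - T_REF) / (T_REF * R * t_leaf))

vcmax_35 = arrhenius(50.0, 65000.0, 308.15)  # higher than the 25 degC value
```

A larger ea steepens the rise with temperature, which is why species-level shifts in the activation energy of Vcmax reshape the whole photosynthesis-temperature curve.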

  2. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from previous studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  3. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  4. Anisotropic effects on constitutive model parameters of aluminum alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter S.; Joshi, Vasant S.

    2012-03-01

    Simulation of low velocity impact on structures or high velocity penetration in armor materials relies heavily on constitutive material models. Model constants are determined from tension, compression or torsion stress-strain data at low and high strain rates and at different temperatures. These model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al7075-T651 alloy bar material failed to correctly simulate penetration into 1' thick Al-7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results show drastically different behavior, it becomes important to determine constitutive parameters from the actual material used in impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, quasi-static and high strain rate tensile tests were performed on specimens fabricated in the longitudinal "L", transverse "T", and thickness "TH" directions of 1' thick Al7075 plate. Flow stresses at strain rates of ~1/s and ~1100/s in the thickness and transverse directions are lower than in the longitudinal direction, while the flow stress in the bar was comparable to that in the longitudinal direction of the plate. Fracture strain data from notched tensile specimens fabricated in the L, T, and TH directions of the 1' thick plate are used to derive fracture constants.

  5. An assessment of the ICE6G_C (VM5A) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, Anthony; Tregoning, Paul; Dehecq, Amaury

    2016-04-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a) [Peltier et al., 2015; Argus et al., 2014] is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology and, of course, geodynamics (Earth rheology studies). In this presentation I will assess some aspects of the ICE6G_C(VM5a) model and the accompanying published data sets. I will demonstrate that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Further, the published spherical harmonic coefficients - which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA) - will be shown to contain excessive power for degree ≥ 90, to be physically implausible and to not represent accurately the ICE6G_C(VM5a) model. The excessive power in the high degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. [2011] is applied but, when correct Stokes' coefficients are used, the empirical relationship will be shown to produce excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. [2011]. Finally, a global radial velocity field for the present-day GIA signal, and corresponding Stokes' coefficients, will be presented for the ICE6G_C ice model history using the VM5a rheology model. These results have been obtained using the ANU group's CALSEA software package and can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals without any of the shortcomings of the previously published data sets. We denote the new data sets ICE6G_ANU.

  6. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those suitable for use in SOC decomposition models under field conditions.
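
    The kinetic quantities described above can be sketched in a few lines. This is a minimal illustration, not the study's parameterization: the Michaelis-Menten rate and an Arrhenius rescaling of Vmax from the 20 °C reference are standard forms, while the pH response below is an assumed exponential-quadratic shape, and all numeric values are hypothetical.

    ```python
    import math

    def michaelis_menten(vmax, km, s):
        """Reaction rate v = Vmax * S / (Km + S)."""
        return vmax * s / (km + s)

    def arrhenius_vmax(vmax_ref, ea, t_kelvin, t_ref=293.15):
        """Rescale Vmax from the 20 °C reference using activation energy Ea (J/mol)."""
        R = 8.314  # gas constant, J/(mol K)
        return vmax_ref * math.exp(-(ea / R) * (1.0 / t_kelvin - 1.0 / t_ref))

    def ph_factor(ph, ph_opt, ph_sen):
        """Assumed exponential-quadratic pH response: 1 at pH_opt, decaying with
        distance from the optimum at a rate set by pH_sen (illustrative form)."""
        return math.exp(-((ph - ph_opt) / ph_sen) ** 2)
    ```

    At S = Km the rate is exactly Vmax/2, which is the defining property of the half-saturation constant.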

  7. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
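
    The core of the IPW estimating approach, an inverse-probability-weighted (Horvitz-Thompson) estimate of an unknown cohort total from a phase-two sample with known sampling fractions, can be sketched as follows. The two-stratum design, the sampling fractions, and the variable y are all hypothetical; the sketch shows only the weighting idea, not the calibration adjustments or the survey package machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A finite cohort (phase one) with a per-subject contribution y_i
    N = 10_000
    y = rng.normal(50.0, 10.0, size=N)

    # Phase two: stratified Bernoulli sampling with known fractions (hypothetical design)
    strata = (y > 50).astype(int)
    frac = np.where(strata == 1, 0.5, 0.1)   # known phase-two sampling fractions pi_i
    sampled = rng.random(N) < frac

    # Horvitz-Thompson (IPW) estimate of the unknown cohort total: sum of y_i / pi_i
    ht_total = float(np.sum(y[sampled] / frac[sampled]))
    true_total = float(y.sum())
    ```

    The estimate is unbiased by construction; the adjustments discussed in the abstract aim to shrink its design-based variance.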

  8. New finite-range droplet mass model and equation-of-state parameters.

    PubMed

    Möller, Peter; Myers, William D; Sagawa, Hiroyuki; Yoshida, Satoshi

    2012-02-03

    The parameters in the macroscopic droplet part of the finite-range droplet model (FRDM) are related to the properties of the equation of state. In the FRDM (1992) version, the optimization of the model parameters was not sufficiently sensitive to variations of the compressibility constant K and the density-symmetry constant L to allow their determination. In the new, more accurate FRDM-2011a, adjustment of the model constants to new and more accurate experimental masses allows the determination of L together with the symmetry-energy constant J. The optimization is still not sensitive to K, which is therefore fixed at K=240 MeV. Our results are J=32.5±0.5 MeV and L=70±15 MeV and a considerably improved mass-model accuracy σ=0.5700 MeV, with respect to the 2003 Atomic Mass Evaluation (AME2003) for FRDM-2011a, compared to σ=0.669 MeV for FRDM (1992).

  9. Modelling of some parameters from thermoelectric power plants

    NASA Astrophysics Data System (ADS)

    Popa, G. N.; Diniş, C. M.; Deaconu, S. I.; Maksay, Şt; Popa, I.

    2016-02-01

    This paper proposes new mathematical models for the main electrical parameters (active power P and reactive power Q of the power supplies) and technological parameters (mass flow rate of steam M from the boiler and dust emission E at the precipitator output) of thermoelectric power plants using industrial plate-type electrostatic precipitators with three sections. The mathematical models were derived, using the least squares method, from experimental results taken from an industrial facility: a boiler and plate-type electrostatic precipitators with three sections. The modelling used equations of degree 1, 2 and 3. Equations were determined for dust emission as a function of the active power of the power supplies and the mass flow rate of steam from the boiler, and also as a function of the reactive power of the power supplies and the mass flow rate of steam from the boiler. These equations can be used to control the process in the electrostatic precipitators.
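
    The degree-1, 2 and 3 least-squares fits described above can be sketched with NumPy's polynomial tools. The measurement values below are hypothetical placeholders, not the plant data from the paper; the sketch shows only the fitting step for one of the relationships (emission vs. active power).

    ```python
    import numpy as np

    # Hypothetical measurements: active power P of the power supplies vs. dust
    # emission E at the precipitator outlet (units illustrative only).
    P = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
    E = np.array([48.0, 35.0, 27.0, 22.0, 19.0, 17.0])

    # Least-squares polynomial fits of degree 1, 2 and 3, as in the paper
    models = {deg: np.polyfit(P, E, deg) for deg in (1, 2, 3)}

    def rss(deg):
        """Residual sum of squares for the degree-`deg` model."""
        return float(np.sum((np.polyval(models[deg], P) - E) ** 2))
    ```

    Since the polynomial spaces are nested, the residual sum of squares can only decrease (or stay equal) as the degree rises, which is one way to compare the three model families.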

  10. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
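
    A misfit cost combining multiple observation types, as described above, needs some normalization so differently scaled data contribute comparably. The following is a minimal sketch of that idea under assumed conventions (RMS residual per type, normalized by the observational spread); it is not the paper's exact cost function.

    ```python
    import numpy as np

    def cost(model, obs):
        """Model-observation misfit summed over data types: each type's RMS
        residual is normalized by the spread of that observation type, so no
        single data type dominates simply because of its units."""
        total = 0.0
        for key in obs:
            resid = np.asarray(model[key]) - np.asarray(obs[key])
            total += np.sqrt(np.mean(resid ** 2)) / np.std(obs[key])
        return total
    ```

    A parameter estimation technique then searches parameter space for the simulation that minimizes this scalar.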

  11. Glacial isostatic adjustment associated with the Barents Sea ice sheet: A modelling inter-comparison

    NASA Astrophysics Data System (ADS)

    Auriac, A.; Whitehouse, P. L.; Bentley, M. J.; Patton, H.; Lloyd, J. M.; Hubbard, A.

    2016-09-01

    The 3D geometrical evolution of the Barents Sea Ice Sheet (BSIS), particularly during its late-glacial retreat phase, remains largely ambiguous due to the paucity of direct marine- and terrestrial-based evidence constraining its horizontal and vertical extent and chronology. One way of validating the numerous BSIS reconstructions previously proposed is to collate and apply them under a wide range of Earth models and to compare prognostic (isostatic) output through time with known relative sea-level (RSL) data. Here we compare six contrasting BSIS load scenarios via a spherical Earth system model and derive a best-fit, χ2 parameter using RSL data from the four main terrestrial regions within the domain: Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway. Poor χ2 values allow two load scenarios to be dismissed, leaving four that agree well with RSL observations. The remaining four scenarios optimally fit the RSL data when combined with Earth models that have an upper mantle viscosity of 0.2-2 × 1021 Pa s, while there is less sensitivity to the lithosphere thickness (ranging from 71 to 120 km) and lower mantle viscosity (spanning 1-50 × 1021 Pa s). GPS observations are also compared with predictions of present-day uplift across the Barents Sea. Key locations where relative sea-level and GPS data would prove critical in constraining future ice-sheet modelling efforts are also identified.
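
    The χ² comparison against relative sea-level data described above reduces, in its simplest form, to a weighted sum of squared residuals. A minimal sketch (the weighting and data handling in the paper may differ):

    ```python
    import numpy as np

    def chi_squared(predicted, observed, sigma):
        """Chi-squared misfit between model-predicted and observed relative
        sea levels, with observational uncertainties sigma."""
        predicted, observed, sigma = map(np.asarray, (predicted, observed, sigma))
        return float(np.sum(((predicted - observed) / sigma) ** 2))
    ```

    Each candidate ice-load/Earth-model pair yields a predicted RSL history; scenarios with poor χ² against the Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway data can then be dismissed.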

  12. On the parameters of absorbing layers for shallow water models

    NASA Astrophysics Data System (ADS)

    Modave, Axel; Deleersnijder, Éric; Delhez, Éric J. M.

    2010-02-01

    Absorbing/sponge layers used as boundary conditions for ocean/marine models are examined in the context of the shallow water equations with the aim to minimize the reflection of outgoing waves at the boundary of the computational domain. The optimization of the absorption coefficient is not an issue in continuous models, for the reflection coefficient of outgoing waves can then be made as small as we please by increasing the absorption coefficient. The optimization of the parameters of absorbing layers is therefore a purely discrete problem. A balance must be found between the efficient damping of outgoing waves and the limited spatial resolution with which the resulting spatial gradients must be described. Using a one-dimensional model as a test case, the performances of various spatial distributions of the absorption coefficient are compared. Two shifted hyperbolic distributions of the absorption coefficient are derived from theoretical considerations for a pure propagative problem and a pure advective problem. These distributions show good performances. Their free parameter has a well-defined interpretation and can therefore be determined on a physical basis. The properties of the two shifted hyperbolas are illustrated using the classical two-dimensional problems of the collapse of a Gaussian-shaped mound of water and of its advection by a mean current. The good behavior of the resulting boundary scheme remains when a full non-linear dynamics is taken into account.
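
    To make the "shifted hyperbolic distribution" idea concrete, here is one plausible shifted-hyperbola profile for the absorption coefficient across a sponge layer: zero where the layer meets the interior and unbounded at the outer boundary. The exact expressions derived in the paper may differ; this is an illustrative form only, with c a velocity scale.

    ```python
    import numpy as np

    def sigma_profile(x, L, c):
        """A shifted hyperbolic absorption coefficient over a sponge layer of
        width L: zero at the interior edge (x = 0), growing without bound as
        x -> L. Illustrative form, not necessarily the paper's expression."""
        x = np.asarray(x, dtype=float)
        return c * (1.0 / (L - x) - 1.0 / L)
    ```

    In a discrete model the profile is sampled at grid points strictly inside the layer, which is where the balance between damping strength and resolvable gradients arises.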

  13. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter; Joshi, Vasant

    2011-06-01

    Simulation of low velocity impact on structures or high velocity penetration in armor materials relies heavily on constitutive material models. The model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high strength 7075-T651 aluminum alloys. Johnson-Cook model constants determined for Al7075-T651 alloy bar material failed to correctly simulate penetration into 1' thick Al-7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results are drastically different, it is important to determine constitutive parameters from the actual material used in impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, we performed quasi-static and high strain rate tensile tests on specimens fabricated in the longitudinal, transverse, and thickness directions of 1' thick Al7075-T651 plate. Flow stresses at a strain rate of ~1100/s in the longitudinal and transverse directions are similar, around 670 MPa, and decrease to 620 MPa in the thickness direction. These data are lower than the flow stress of 760 MPa measured in Al7075-T651 bar stock.

  14. Rejection, Feeling Bad, and Being Hurt: Using Multilevel Modeling to Clarify the Link between Peer Group Aggression and Adjustment

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.

    2010-01-01

    The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…

  15. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    ERIC Educational Resources Information Center

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  16. Internal Working Models and Adjustment of Physically Abused Children: The Mediating Role of Self-Regulatory Abilities

    ERIC Educational Resources Information Center

    Hawkins, Amy L.; Haskett, Mary E.

    2014-01-01

    Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…

  17. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    ERIC Educational Resources Information Center

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  18. The Effectiveness of the Strength-Centered Career Adjustment Model for Dual-Career Women in Taiwan

    ERIC Educational Resources Information Center

    Wang, Yu-Chen; Tien, Hsiu-Lan Shelley

    2011-01-01

    The authors investigated the effectiveness of a Strength-Centered Career Adjustment Model for dual-career women (N = 28). Fourteen women in the experimental group received strength-centered career counseling for 6 to 8 sessions; the 14 women in the control group received test services in 1 to 2 sessions. All participants completed the Personal…

  19. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  20. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility class helicopter. The investigation was conducted at several test-medium densities, which permits the acquisition of data for several Reynolds and Lock number combinations.
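
    The two scaling parameters named above have standard definitions that make the density dependence explicit. The numeric values in the sketch below are hypothetical, chosen only to illustrate that both numbers scale linearly with the test-medium density, which is why a variable-density tunnel can reach several Reynolds/Lock combinations.

    ```python
    def lock_number(rho, a, c, R, I_beta):
        """Lock number gamma = rho * a * c * R**4 / I_beta: ratio of aerodynamic
        to inertial forces on the blade (rho: air density, a: lift-curve slope,
        c: blade chord, R: rotor radius, I_beta: blade flapping inertia)."""
        return rho * a * c * R ** 4 / I_beta

    def reynolds_number(rho, V, c, mu):
        """Chord-based Reynolds number (V: local velocity, mu: dynamic viscosity)."""
        return rho * V * c / mu
    ```

    Doubling the density doubles both numbers if all other quantities are held fixed, so matching a full-scale Lock number at model scale constrains the achievable Reynolds number and vice versa.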

  1. Estimating effective model parameters for heterogeneous unsaturated flow using error models for bias correction

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Huisman, J. A.

    2012-06-01

    Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real-world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.
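
    The bias-correction idea, removing the expected structural error from the residuals before they enter the likelihood, can be sketched in its simplest Gaussian form. This is a minimal illustration of the concept; the paper's error models are richer than a constant bias term.

    ```python
    import numpy as np

    def log_likelihood(obs, sim, bias, sigma):
        """Gaussian log-likelihood with an external error model: the expected
        bias from unresolved soil structure is subtracted from the residuals
        before they enter the likelihood."""
        r = np.asarray(obs) - np.asarray(sim) - bias
        return float(-0.5 * np.sum((r / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2)))
    ```

    If the flow model systematically under-predicts soil moisture, a well-chosen bias term raises the likelihood of the true effective parameters instead of letting the optimizer distort them to absorb the structural error.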

  2. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  3. Parameter estimation and analysis model selections in fluorescence correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Dong, Shiqing; Zhou, Jie; Ding, Xuemei; Wang, Yuhua; Xie, Shusen; Yang, Hongqin

    2016-10-01

    Fluorescence correlation spectroscopy (FCS) is a powerful technique that can provide high temporal resolution for detecting the diffusion of biomolecules at extremely low concentrations. The accuracy of this approach primarily depends on the experimental conditions and the data analysis model. In this study, we set up a confocal-based FCS system and used a Rhodamine 6G solution to calibrate the system and obtain the related parameters. An experimental measurement was carried out on a one-component solution to evaluate the relationship between the number of molecules and the concentration. The results showed that the FCS system we built was stable and valid. Finally, a two-component solution experiment was carried out to show the importance of analysis model selection. FCS is a promising method for studying single-molecule diffusion in living cells.
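
    The standard one-component analysis model for 3D free diffusion, which is the kind of model whose selection the abstract discusses, can be written down directly. The functional form below is the textbook FCS autocorrelation; the parameter values in the test are illustrative, not from this study.

    ```python
    import numpy as np

    def fcs_autocorr(tau, n, tau_d, kappa):
        """One-component 3D free-diffusion FCS autocorrelation:
        G(tau) = (1/N) * (1 + tau/tau_d)^-1 * (1 + tau/(kappa^2 tau_d))^-1/2,
        where N is the mean number of molecules in the focal volume, tau_d the
        diffusion time, and kappa the axial-to-lateral aspect ratio."""
        tau = np.asarray(tau, dtype=float)
        return (1.0 / n) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (kappa ** 2 * tau_d)))
    ```

    The zero-lag amplitude G(0) = 1/N is what links the fitted curve to concentration: halving the concentration doubles the amplitude, which is the relationship probed by the one-component experiment.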

  4. A Proportional Hazards Regression Model for the Sub-distribution with Covariates Adjusted Censoring Weight for Competing Risks Data

    PubMed Central

    HE, PENG; ERIKSSON, FRANK; SCHEIKE, THOMAS H.; ZHANG, MEI-JIE

    2015-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR). Here cancer relapse and death in complete remission are two competing risks. PMID:27034534

  5. A novel criterion for determination of material model parameters

    NASA Astrophysics Data System (ADS)

    Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.

    2011-05-01

    Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective function should be able to efficiently lead the optimisation process. An ideal objective function should have the following properties: (i) all the experimental data points on a curve, and all experimental curves, should have an equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null values of experimental or numerical data also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
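
    Properties (i) and (ii) above can be illustrated with a small dimensionless objective: each curve contributes equally regardless of its units or number of points, and the normalization guards against curves containing null values. This is an illustrative form in the spirit of the discussion, not the paper's exact formulation.

    ```python
    import numpy as np

    def objective(sim_curves, exp_curves):
        """Dimensionless objective over matched simulated/experimental curves.
        Each curve's mean squared residual is normalized by the mean magnitude
        of its experimental data, so units and point counts do not bias the fit."""
        total = 0.0
        for sim, exp in zip(sim_curves, exp_curves):
            sim = np.asarray(sim, dtype=float)
            exp = np.asarray(exp, dtype=float)
            scale = np.mean(np.abs(exp))
            if scale == 0.0:          # curve of null values: fall back to unit scale
                scale = 1.0
            total += np.mean(((sim - exp) / scale) ** 2)
        return total / len(sim_curves)
    ```

    Because each curve is normalized by its own scale, expressing one curve in different units (MPa vs. Pa, say) leaves its contribution unchanged, which is exactly criterion (ii).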

  6. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    NASA Astrophysics Data System (ADS)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communication performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by the dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (quiet day) at 12 stations around Thailand: 0°N to 25°N and 95°E to 110°E. These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, and the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° x 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the estimated VTEC from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
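    The latitude-dependent weighting step can be sketched as a simple blend of model and observed VTEC. The weight shape and its parameters (`w0`, `scale`) are assumptions for illustration, not the paper's values:

```python
import numpy as np

def assimilate_vtec(obs_vtec, model_vtec, lat_deg, w0=0.5, scale=15.0):
    """Blend observed VTEC with a background model (IRI or IGS) using a
    latitude-dependent weight. Illustrative sketch: the model is trusted
    less away from the equator, where EIA gradients are strong."""
    w = w0 * np.exp(-np.abs(lat_deg) / scale)   # weight on the background model
    return w * model_vtec + (1.0 - w) * obs_vtec
```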

  7. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
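    DALEC itself is not reproduced here, but the flavour of a box-model inverse problem can be sketched with a single carbon pool and a grid-search inversion of its turnover rate. All names and values below are illustrative, not the DALEC parameterisation:

```python
import numpy as np

def carbon_pool(c0, gpp, k, n_steps):
    """Minimal one-pool carbon box model (illustrative stand-in for DALEC):
    dC/dt = GPP - k*C, integrated with forward Euler and a unit time step."""
    c = np.empty(n_steps)
    c[0] = c0
    for t in range(1, n_steps):
        c[t] = c[t - 1] + gpp - k * c[t - 1]
    return c

def fit_turnover(obs, c0, gpp, k_grid):
    """Toy inverse problem: pick the turnover rate k minimising the
    least-squares misfit to an observed stock trajectory."""
    costs = [np.sum((carbon_pool(c0, gpp, k, len(obs)) - obs) ** 2)
             for k in k_grid]
    return float(k_grid[int(np.argmin(costs))])
```

A variational (4DVAR-style) approach would replace the grid search with gradient descent driven by an adjoint model, but the cost function being minimised is the same misfit.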

  8. Moment Reconstruction and Moment-Adjusted Imputation When Exposure is Generated by a Complex, Nonlinear Random Effects Modeling Process

    PubMed Central

    Potgieter, Cornelis J.; Wei, Rubin; Kipnis, Victor; Freedman, Laurence S.; Carroll, Raymond J.

    2016-01-01

    For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. Like classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, and also to match some of the latent variable's relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and we develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors, and problems that are not measurement error problems but where the data-generating process for true exposure is a complex, nonlinear random effects modeling process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index - 2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods. PMID:27061196
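    A rough sketch of the moment-matching idea, assuming the simplest classical error model w = x + u with known error variance: within each level of the response, the error-prone measurement is rescaled so its mean and variance match those implied for the latent variable. This is an illustration of the principle, not the exact Freedman et al. estimator:

```python
import numpy as np

def moment_reconstruct(w, y, var_u):
    """Moment-reconstruction-style proxy (sketch). Assumes classical error
    w = x + u with known error variance var_u; within each level of the
    response y, shrink w toward its group mean so the proxy's variance
    matches the implied latent variance."""
    w = np.asarray(w, dtype=float)
    x_mr = np.empty_like(w)
    for level in np.unique(y):
        idx = (y == level)
        m = w[idx].mean()
        v_w = w[idx].var()
        v_x = max(v_w - var_u, 1e-12)         # implied latent variance
        x_mr[idx] = m + np.sqrt(v_x / v_w) * (w[idx] - m)
    return x_mr
```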

  9. A parameter model for dredge plume sediment source terms

    NASA Astrophysics Data System (ADS)

    Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc

    2017-01-01

    , which is not available in all situations. For example, to allow correct representation of overflow plume dispersion in a real-time forecasting model, a fast assessment of the near-field behaviour is needed. For this reason, a semi-analytical parameter model has been developed that reproduces the near-field sediment dispersion obtained with the CFD model in a relatively accurate way. In this paper, this so-called grey-box model is presented.

  10. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm can still become trapped in local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weight, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the rebuilt model using the estimated parameters shows that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared to the improved PSO to verify that the algorithm proposed in this paper avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
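    The general idea of a concave, chaotically perturbed inertia weight can be sketched as below. The specific concave schedule, the logistic-map perturbation amplitude, and the acceleration coefficients are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def chaotic_pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    """PSO whose inertia weight follows a concave schedule perturbed by a
    logistic chaotic map (illustrative sketch of the paper's idea)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    z = 0.7                                        # logistic-map state
    for t in range(n_iter):
        z = 4.0 * z * (1.0 - z)                    # chaotic sequence in (0, 1)
        w = 0.9 - 0.5 * (t / n_iter) ** 2 + 0.1 * (z - 0.5)  # concave + chaos
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better] = x[better]
        pval[better] = val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())
```

The chaotic term keeps the inertia weight from decaying monotonically, which is the mechanism invoked to escape local optima.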

  11. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations which usually involve EVA has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific for certain camera/lens combinations and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.
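    The Gaussian spread function used to model finite monitor resolution can be sketched as a separable blur; the sigma value (in pixels) is an assumed parameter, not one taken from the shuttle monitor specifications:

```python
import numpy as np

def gaussian_psf_blur(img, sigma):
    """Apply a Gaussian point-spread function to a 2-D image via two 1-D
    convolutions (rows, then columns), modelling phosphor spread on a
    finite-resolution monitor. sigma is in pixels (illustrative)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()                                   # unit-gain kernel
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred
```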

  12. A Normalized Direct Approach for Estimating the Parameters of the Normal Ogive Three-Parameter Model for Ability Tests.

    ERIC Educational Resources Information Center

    Gugel, John F.

    A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…

  13. Bayesian or Non-Bayesian: A Comparison Study of Item Parameter Estimation in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Gao, Furong; Chen, Lisue

    2005-01-01

    Through a large-scale simulation study, this article compares item parameter estimates obtained by the marginal maximum likelihood estimation (MMLE) and marginal Bayes modal estimation (MBME) procedures in the 3-parameter logistic model. The impact of different prior specifications on the MBME estimates is also investigated using carefully…

  14. Divorce Stress and Adjustment Model: Locus of Control and Demographic Predictors.

    ERIC Educational Resources Information Center

    Barnet, Helen Smith

    This study depicts the divorce process over three time periods: predivorce decision phase, divorce proper, and postdivorce. Research has suggested that persons with a more internal locus of control experience less intense and shorter intervals of stress during the divorce proper and better postdivorce adjustment than do persons with a more…

  15. Social Adjustment and Academic Achievement: A Predictive Model for Students with Diverse Academic and Behavior Competencies

    ERIC Educational Resources Information Center

    Ray, Corey E.; Elliott, Stephen N.

    2006-01-01

    This study examined the hypothesized relationship between social adjustment, as measured by perceived social support, self-concept, and social skills, and performance on academic achievement tests. Participants included 27 teachers and 77 fourth- and eighth-grade students with diverse academic and behavior competencies. Teachers were asked to…

  16. Extending the Integrated Model of Retirement Adjustment: Incorporating Mastery and Retirement Planning

    ERIC Educational Resources Information Center

    Donaldson, Tarryn; Earl, Joanne K.; Muratore, Alexa M.

    2010-01-01

    Extending earlier research, this study explores individual (e.g. demographic and health characteristics), psychosocial (e.g. mastery and planning) and organizational factors (e.g. conditions of workforce exit) influencing retirement adjustment. Survey data were collected from 570 semi-retired and retired men and women aged 45 years and older.…

  17. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  18. A Structural Equation Modeling Approach to the Study of Stress and Psychological Adjustment in Emerging Adults

    ERIC Educational Resources Information Center

    Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff

    2008-01-01

    Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…

  19. Towards an Integrated Model of Individual, Psychosocial, and Organizational Predictors of Retirement Adjustment

    ERIC Educational Resources Information Center

    Wong, Jessica Y.; Earl, Joanne K.

    2009-01-01

    This cross-sectional study examines three predictors of retirement adjustment: individual (demographic and health), psychosocial (work centrality), and organizational (conditions of workforce exit). It also examines the effect of work centrality on post-retirement activity levels. Survey data was collected from 394 retirees (aged 45-93 years).…

  20. Models of Cultural Adjustment for Child and Adolescent Migrants to Australia: Internal Process and Situational Factors

    ERIC Educational Resources Information Center

    Sonderegger, Robi; Barrett, Paula M.; Creed, Peter A.

    2004-01-01

    Building on previous cultural adjustment profile work by Sonderegger and Barrett (2004), the aim of this study was to propose an organised structure for a number of single risk factors that have been linked to acculturative-stress in young migrants. In recognising that divergent situational characteristics (e.g., school level, gender, residential…

  1. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

    USGS Publications Warehouse

    Barks, C.S.

    1995-01-01

    Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
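    The core of a MAP-R-P-style adjustment is a regression of observed values O on regional-model predictions Pu, with the fitted line then applied to new predictions. The sketch below assumes simple linear regression; the data interface is hypothetical:

```python
import numpy as np

def map_r_p(observed, predicted):
    """Regress observed loads O on regional-model predictions Pu and return
    a function that adjusts new predictions (simple-linear-regression sketch
    of the MAP-R-P idea)."""
    slope, intercept = np.polyfit(predicted, observed, 1)
    return lambda pu: slope * np.asarray(pu) + intercept
```

This only makes sense when O and Pu are significantly correlated; otherwise, as the abstract notes, a simple estimator such as the mean of the observed values is preferable.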

  2. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    SciTech Connect

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi; Tian, Fuqiang; Leung, Lai-Yung R.

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
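    A minimal random-walk Metropolis sampler illustrates the MCMC-Bayesian inversion step. The CLM4-specific likelihood is out of scope; the standard-normal log-posterior in the test below is purely illustrative:

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian perturbation of
    the current parameter vector and accept with probability
    min(1, post(prop)/post(current))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

In a calibration setting, `log_post` would combine a prior over the hydrologic parameters with a misfit between simulated and observed fluxes or runoff.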

  3. Relevant parameters in models of cell division control

    NASA Astrophysics Data System (ADS)

    Grilli, Jacopo; Osella, Matteo; Kennard, Andrew S.; Lagomarsino, Marco Cosentino

    2017-03-01

    A recent burst of dynamic single-cell data makes it possible to characterize the stochastic dynamics of cell division control in bacteria. Different models were used to propose specific mechanisms, but the links between them are poorly explored. The lack of comparative studies makes it difficult to appreciate how well any particular mechanism is supported by the data. Here, we describe a simple and generic framework in which two common formalisms can be used interchangeably: (i) a continuous-time division process described by a hazard function and (ii) a discrete-time equation describing cell size across generations (where the unit of time is a cell cycle). In our framework, this second process is a discrete-time Langevin equation with simple physical analogues. By perturbative expansion around the mean initial size (or interdivision time), we show how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well as the constant added size mechanism recently found to capture several aspects of the cell division behavior of different bacteria. As we show by analytical estimates and numerical simulations, the available data are described precisely by the first-order approximation of this expansion, i.e., by a "linear response" regime for the correction of size fluctuations. Hence, a single dimensionless parameter defines the strength and action of the division control against cell-to-cell variability (quantified by a single "noise" parameter). However, the same strength of linear response may emerge from several mechanisms, which are distinguished only by higher-order terms in the perturbative expansion. Our analytical estimate of the sample size needed to distinguish between second-order effects shows that this value is close to but larger than the values of the current datasets. These results provide a unified framework for future studies and clarify the relevant parameters at play in the control of
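    The discrete-time Langevin picture can be sketched as a linear-response recursion for newborn-size deviations from the mean, with a single dimensionless control strength and a single noise amplitude. The specific parameter values below are illustrative:

```python
import numpy as np

def size_control(n_gen, strength, noise, seed=0):
    """Linear-response model of division control across generations:
    the newborn-size deviation x relaxes as x' = (1 - strength)*x + eta,
    where strength is the dimensionless control parameter and eta is
    Gaussian noise (values illustrative)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    traj = np.empty(n_gen)
    for i in range(n_gen):
        x = (1.0 - strength) * x + noise * rng.standard_normal()
        traj[i] = x
    return traj
```

For this AR(1) recursion the stationary variance is noise**2 / (1 - (1 - strength)**2), so the same fluctuation statistics can arise from different mechanisms sharing one effective strength, which is the degeneracy the abstract describes.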

  4. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    NASA Astrophysics Data System (ADS)

    Song-Feng, Zhao; Fang, Huang; Guo-Li, Wang; Xiao-Xin, Zhou

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so-called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using density functional theory. We examine the alignment-dependent tunneling ionization probabilities from the MO-ADK model for several molecules by comparing with molecular strong-field approximation (MO-SFA) calculations. We show that the molecular Perelomov-Popov-Terent'ev (MO-PPT) model can successfully give the laser-wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbitals with antibonding symmetry (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  5. Sound propagation and absorption in foam - A distributed parameter model.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
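    The transmission-line analogy maps directly onto the standard relations Z0 = sqrt(Z/Y) and gamma = sqrt(Z*Y) for distributed series impedance Z and shunt admittance Y. The sketch below uses generic R, L, G, C line constants; mapping foam properties onto these constants follows the paper's analog, which is not reproduced here:

```python
import numpy as np

def line_constants(R, L, G, C, omega):
    """Characteristic impedance Z0 and propagation constant gamma of a
    distributed-parameter line with series impedance Z = R + j*omega*L
    and shunt admittance Y = G + j*omega*C."""
    Z = R + 1j * omega * L
    Y = G + 1j * omega * C
    return np.sqrt(Z / Y), np.sqrt(Z * Y)
```

For a lossless line (R = G = 0) this reduces to the familiar Z0 = sqrt(L/C) with a purely imaginary propagation constant.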

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification

  7. Modelling Hydrological Processes in Presence of Uncertain or Unreliable Forcing Data and Land Surface Parameters

    NASA Astrophysics Data System (ADS)

    Gusev, Ye. M.; Nasonova, O. N.; Dzhogan, L. Ya.

    2009-04-01

    by, respectively, 13 and 60 one-degree grid cells connected by river networks. Runoff was modeled for each cell and then transformed by a river routing model to simulate streamflow at a river basin outlet. The land surface parameters for each grid cell were taken from the one-degree global data sets of the Second Global Soil Wetness Project (GSWP-2). Seven soil and vegetation parameters were selected for calibration. Meteorological forcing data were taken from the GSWP-2 3-hour global data sets for the period of 1983-1995. To reduce the systematic errors in precipitation and incoming radiation, adjustment factors were applied separately for liquid and solid precipitation, as well as for incoming shortwave and longwave radiation. Their values can be obtained by calibration; thus there were 11 calibrated parameters. Calibration was performed by means of a stochastic optimization technique using daily streamflow measured during 1986-1990 at the Malonisogorskaya gauging station for the Mezen River and at the Oksino gauging station for the Pechora River. Model validation using optimal values of the calibrated parameters was performed for the next 5 years. The Nash and Sutcliffe efficiency of daily runoff simulation for the validation period (for both rivers) was within the range 0.75-0.82, the correlation coefficient equaled 0.88-0.91, and the bias did not exceed 6%. Thus it can be concluded that the physically-based LSM SWAP can be used as a quite functional tool for hydrological modelling in the case of poor, uncertain and unreliable input data.
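    The Nash and Sutcliffe efficiency used to score the daily runoff simulations is simple to compute:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 for a perfect simulation, 0 for a
    simulation no better than the observed mean, negative below that."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Values of 0.75-0.82, as reported for the validation period, indicate the simulation explains most of the daily streamflow variance.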

  8. Dynamic Conceptual Model of Sediment Fluxes Underlying Numerical Modelling of Spatial and Temporal Variability and Adjustment to Environmental Change

    NASA Astrophysics Data System (ADS)

    Hooke, J.

    2015-12-01

    It is essential that a strong conceptual model underlies numerical modelling of basin fluxes and is inclusive of all factors and routeways through the system. Even under stable environmental conditions river fluxes in large basins vary spatially and temporally. Spatial variations arise due to location in the basin, relation to sources and connectivity, and due to morphology, boundary resistance and hydraulics of successive reaches. Temporal variations at a range of scales, from seasonal to decadal, occur within averaged 'stable' conditions, which produce changes in morphology and flux and subsequent feedback effects. Sediment flux in a reach can differ between similar peak magnitude events, depending on duration, season, connectivity and supply state, and existing morphology. Autogenic processes such as channel pattern and position changes, vegetation changes, and floodplain cyclicity also take place within the system. The major drivers of change at decadal-centennial timescales are assumed to be climate, land use cover and practices, and direct catchment and channel modification. Different parts of the system will have different trajectories of adjustment, depending on their location and spatial relation to connectivity within the system and on the reach morphological and resistance characteristics. These will govern the rate and extent of transmission of changes. The changes will also be influenced by the occurrence and sequence of flow events and their feedback effects, in relation to changing thresholds produced by the response to the environmental changes. It is essential that the underlying dynamics and inherent variability are recognised in numerical modelling and river management and that spatial sequencing of changes and their feedbacks are incorporated. The challenge is to produce quantifiable relations of the rate or propagation of changes through a basin given spatial variability of reach characteristics, under dynamic flow scenarios.

  9. How do attachment dimensions affect bereavement adjustment? A mediation model of continuing bonds.

    PubMed

    Yu, Wei; He, Li; Xu, Wei; Wang, Jianping; Prigerson, Holly G

    2016-04-30

    The current study aims to examine mechanisms underlying the impact of attachment dimensions on bereavement adjustment. Bereaved mainland Chinese participants (N=247) completed anonymous, retrospective, self-report surveys assessing attachment dimensions, continuing bonds (CB), grief symptoms and posttraumatic growth (PTG). Results demonstrated that attachment anxiety predicted grief symptoms via externalized CB and predicted PTG via internalized CB at the same time, whereas attachment avoidance positively predicted grief symptoms via externalized CB but negatively predicted PTG directly. Findings suggested that individuals with a high level of attachment anxiety could both suffer from grief and obtain posttraumatic growth after loss, but it depended on which kind of CB they used. By contrast, attachment avoidance was associated with a heightened risk of maladaptive bereavement adjustment. Future grief therapy may encourage the bereaved to establish CB with the deceased and gradually shift from externalized CB to internalized CB.

  10. An age adjustment of very young children of India, 1981 and reappraisal of fertility and mortality rates--A model approach.

    PubMed

    Mukhopadhyay, B K

    1986-01-01

    Several approaches were made by actuaries and demographers to correct and smooth the Indian age distribution with special emphasis on population in age group 0-4 at different points of time. The present analysis conceives the life table stationary population (using the West Model) as 'reference standard'. Two parameters were estimated from a regression equation using the proportion of population in age groups 5-14 and 60-plus as independent variables and that in 0-4 as the dependent variable. The corrected census proportions in age group 0-4 obtained from the regression model under certain assumptions for the 14 major states and India seem to be consistent and to have slightly lower values than those of the 1971 adjusted data. Moreover, unadjusted and adjusted proportions in 5-14 and 60-plus do not show any significant difference between the predicted values. Using the corrected population aged 0-4 years, the average annual birth and death rates during the 5-year period preceding the 1981 census have been estimated for those 14 states and India as well. The estimated birth rates so obtained were further adjusted using an appropriate factor from the West Model and Indian life table survival ratios. The final estimates seem to be consistent, except for a few, and to have slightly higher values than those of earlier estimates. As the present analysis is based on a 5% sample and confined to only 14 states, it is proposed to study the same for all the states and India in greater detail using full-count data on age distribution and actual life tables as and when available.
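    The two-predictor regression described above can be sketched with ordinary least squares; the state-level proportions below are purely hypothetical illustrations, not the 1981 census figures:

```python
import numpy as np

# Hypothetical state-level proportions (illustrative, NOT the 1981 census data):
# columns: proportion aged 5-14, proportion aged 60-plus
X = np.array([
    [0.25, 0.06],
    [0.27, 0.05],
    [0.24, 0.07],
    [0.26, 0.07],
    [0.28, 0.05],
])
y = np.array([0.13, 0.14, 0.12, 0.13, 0.15])  # proportion aged 0-4

# Fit the two-predictor regression by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])     # intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# "Corrected" proportion aged 0-4 for a state with p(5-14)=0.26, p(60+)=0.06
pred = coef @ np.array([1.0, 0.26, 0.06])
```

    The fitted equation plays the role of the paper's correction: the predicted 0-4 proportion replaces the (under-enumerated) census count.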

  11. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    SciTech Connect

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, derived from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.

  12. Modeling and optimization of adjustable multifrequency axially polarized multilayer composite cylindrical transducer

    NASA Astrophysics Data System (ADS)

    Wang, Jianjun; Shi, Zhifei; Song, Gangbing

    2015-04-01

    A novel adjustable multifrequency axially polarized multilayer composite cylindrical transducer is developed in this paper. The transducer is composed of two parts: an actuator part and a sensor part. Each part is considered as a multilayer piezoelectric/elastic composite structure. The actuator part is utilized to actuate the transducer, while the sensor part is used to adjust its dynamic characteristics through connecting to an external electric resistance. Based on the plane stress assumption, the radial vibration of this new kind of transducer is analyzed, and its input electric admittance is derived analytically. Comparisons with earlier works are conducted to validate the theoretical solution. Furthermore, numerical analysis is performed to study the effects of the external electric resistance on the transducer's dynamic characteristics, such as resonance and anti-resonance frequencies, as well as the corresponding electromechanical coupling factor. Numerical results show that the multifrequency cylindrical transducer can be designed through adjusting the external electric resistance and the ratio of piezoelectric layer numbers between the actuator part and the sensor part. In addition, an optimized transducer can be obtained at the matching electric resistance. The proposed cylindrical transducer plays an important role in designing the cymbal transducer, which can be used in underwater sound projectors and ultrasonic radiators.

  13. Sensitivity of numerical dispersion modeling to explosive source parameters

    SciTech Connect

    Baskett, R.L.; Cederwall, R.T.

    1991-02-13

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs.

  14. Fundamental parameters of pulsating stars from atmospheric models

    NASA Astrophysics Data System (ADS)

    Barcza, S.

    2006-12-01

    A purely photometric method is reviewed to determine distance, mass, equilibrium temperature, and luminosity of pulsating stars by using model atmospheres and hydrodynamics. T Sex is given as an example: on the basis of Kurucz atmospheric models and UBVRI (in both Johnson and Kron-Cousins systems) data, the variation of angular diameter, effective temperature, and surface gravity is derived as a function of phase, and mass M = (0.76 ± 0.09) M⊙, distance d = 530 ± 67 pc, Rmax = 2.99 R⊙, Rmin = 2.87 R⊙, and a magnitude-averaged visual absolute brightness <MVmag> = 1.17 ± 0.26 mag are found. During a pulsation cycle four standstills of the atmosphere are pointed out, indicating the occurrence of two shocks in the atmosphere. The derived equilibrium temperature Teq = 7781 K and luminosity (28.3 ± 8.8) L⊙ locate T Sex on the blue edge of the instability strip in a theoretical Hertzsprung-Russell diagram. The differences between the physical parameters of this study and those of Liu & Janes (1990) are discussed.

  15. Mechanical models for insect locomotion: stability and parameter studies

    NASA Astrophysics Data System (ADS)

    Schmitt, John; Holmes, Philip

    2001-08-01

    We extend the analysis of simple models for the dynamics of insect locomotion in the horizontal plane, developed in [Biol. Cybern. 83 (6) (2000) 501] and applied to cockroach running in [Biol. Cybern. 83 (6) (2000) 517]. The models consist of a rigid body with a pair of effective legs (each representing the insect’s support tripod) placed intermittently in ground contact. The forces generated may be prescribed as functions of time, or developed by compression of a passive leg spring. We find periodic gaits in both cases, and show that prescribed (sinusoidal) forces always produce unstable gaits, unless they are allowed to rotate with the body during stride, in which case a (small) range of physically unrealistic stable gaits does exist. Stability is much more robust in the passive spring case, in which angular momentum transfer at touchdown/liftoff can result in convergence to asymptotically straight motions with bounded yaw, fore-aft and lateral velocity oscillations. Using a non-dimensional formulation of the equations of motion, we also develop exact and approximate scaling relations that permit derivation of gait characteristics for a range of leg stiffnesses, lengths, touchdown angles, body masses and inertias, from a single gait family computed at ‘standard’ parameter values.

  16. Simultaneous model discrimination and parameter estimation in dynamic models of cellular systems

    PubMed Central

    2013-01-01

    Background Model development is a key task in systems biology, which typically starts from an initial model candidate and, through an iterative cycle of hypothesis-driven model modifications, leads to new experimentation and subsequent model identification steps. The final product of this cycle is a satisfactory refined model of the biological phenomena under study. During such iterative model development, researchers frequently propose a set of model candidates from which the best alternative must be selected. Here we consider this problem of model selection and formulate it as a simultaneous model selection and parameter identification problem. More precisely, we consider a general mixed-integer nonlinear programming (MINLP) formulation for model selection and identification, with emphasis on dynamic models consisting of sets of either ODEs (ordinary differential equations) or DAEs (differential algebraic equations). Results We solved the MINLP formulation for model selection and identification using an algorithm based on Scatter Search (SS). We illustrate the capabilities and efficiency of the proposed strategy with a case study considering the KdpD/KdpE system regulating potassium homeostasis in Escherichia coli. The proposed approach resulted in a final model that presents a better fit to the in silico generated experimental data. Conclusions The presented MINLP-based optimization approach for nested-model selection and identification is a powerful methodology for model development in systems biology. This strategy can be used to perform model selection and parameter estimation in one single step, thus greatly reducing the number of experiments and computations of traditional modeling approaches. PMID:23938131

  17. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will replace the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high-resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  18. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    SciTech Connect

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  19. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exception of an improved ocean loading model and some new options for handling clock parametrization. Misprints that were discovered have been corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  20. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  1. What's the Risk? A Simple Approach for Estimating Adjusted Risk Measures from Nonlinear Models Including Logistic Regression

    PubMed Central

    Kleinman, Lawrence C; Norton, Edward C

    2009-01-01

    Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
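    The core idea behind regression risk analysis, as described here, is standardization: fit the logistic model, predict each subject's risk with the exposure switched on and then off, and take the ratio of the averaged risks. Below is a minimal sketch on simulated data; the Newton-Raphson fit and all coefficient values are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: binary exposure, one continuous confounder, common outcome
n = 2000
conf = rng.normal(size=n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-conf)))   # exposure depends on confounder
logit = -0.5 + 0.8 * exposure + 0.6 * conf
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit the logistic model by Newton-Raphson
X = np.column_stack([np.ones(n), exposure, conf])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    H = (X * (p * (1 - p))[:, None]).T @ X            # Fisher information matrix
    beta += np.linalg.solve(H, X.T @ (outcome - p))   # Newton ascent step

# Standardization: average predicted risk with exposure forced to 1, then to 0
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1.0, 0.0
risk1 = (1 / (1 + np.exp(-X1 @ beta))).mean()
risk0 = (1 / (1 + np.exp(-X0 @ beta))).mean()
arr = risk1 / risk0                                   # adjusted risk ratio
```

    Because the outcome here is common, `arr` differs noticeably from the adjusted odds ratio `exp(beta[1])`, which is exactly the divergence the abstract warns about.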

  2. Parameter Set Uniqueness and Confidence Limits in Model Identification of Insulin Transport Models for Simulation Data Diabetic Patient Models

    PubMed Central

    Farmer, Terry G.; Edgar, Thomas F.; Peppas, Nicholas A.

    2011-01-01

    Background The use of patient models describing the dynamics of glucose, insulin, and possibly other metabolic species associated with glucose regulation allows diabetes researchers to gain insights regarding novel therapies via simulation. However, such models are only useful when model parameters are effectively estimated with patient data. Methods The use of least squares to effectively estimate model parameters from simulation data was investigated by observing factors that influence the accuracy of estimates for the model parameters from a data set generated using a model with known parameters. An intravenous insulin pharmacokinetic model was used to generate the insulin response of a patient with type 1 diabetes mellitus to a series of step changes in the insulin infusion rate from an external insulin pump. The effects of using user-defined gradient and Hessian calculations on both parameter estimations and the 95% confidence limits of the estimated parameter sets were investigated. Results Estimations performed by either solver without user-supplied quantities were highly dependent on the initial guess of the parameter set, with relative confidence limits greater than ±100%. The use of user-defined quantities allowed the one-compartment model parameters to be effectively estimated. While the two-compartment model parameter estimation still depended on the initial parameter set specification, confidence limits were decreased, and all fits to simulation data were very good. Conclusions The use of user-defined gradients and Hessian matrices results in more accurate parameter estimations for insulin transport models. Improved estimation could result in more accurate simulations for use in glucose control system design. PMID:18260776
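    The effect of user-supplied derivatives can be illustrated with a Gauss-Newton fit that uses an analytic Jacobian instead of finite differences. The one-compartment bolus model below is a hypothetical stand-in for illustration, not the pump-infusion model of the study:

```python
import numpy as np

# Hypothetical one-compartment bolus model (illustration only):
# plasma insulin I(t) = (D / V) * exp(-k * t), parameters theta = (k, V)
D = 5.0                                   # bolus dose (arbitrary units)
t = np.linspace(0.5, 30.0, 20)            # sampling times
true_k, true_V = 0.15, 12.0               # elimination rate, volume
rng = np.random.default_rng(1)
data = (D / true_V) * np.exp(-true_k * t) + rng.normal(0.0, 0.002, t.size)

def residuals(theta):
    k, V = theta
    return (D / V) * np.exp(-k * t) - data

def jacobian(theta):
    # User-supplied analytic derivatives of the residuals
    k, V = theta
    e = np.exp(-k * t)
    return np.column_stack([-(D / V) * t * e,      # d r / d k
                            -(D / V**2) * e])      # d r / d V

# Gauss-Newton iterations using the analytic Jacobian
theta = np.array([0.2, 10.0])             # initial guess away from the truth
for _ in range(50):
    J, r = jacobian(theta), residuals(theta)
    theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
```

    The same Jacobian callable could be passed to a library solver (for instance, `scipy.optimize.least_squares` accepts one via its `jac` argument), which is the kind of user-defined quantity the abstract credits with improving the estimates.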

  3. Rock thermal conductivity as key parameter for geothermal numerical models

    NASA Astrophysics Data System (ADS)

    Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele

    2013-04-01

    Geothermal energy applications are undergoing rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground along with the implementation of efficient thermal energy transfer technologies. This paper focuses on understanding the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling since it directly controls the steady-state temperature field. An evaluation of this thermal property is required in several fields, such as thermo-hydro-mechanical multiphysics analysis of frozen soils, designing ground-source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of the subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low- and high-enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data for sedimentary, igneous and metamorphic rocks, a series of laboratory measurements has been performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal properties tests were carried out both in dry and wet conditions, using a C-Therm TCi device operating according to the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water-saturated and dehydrated with a fan-forced drying oven at 70 °C for 24 h, to preserve the mineral assemblage and prevent changes in effective porosity. Subsequently, the samples were stored in an air-conditioned room while bulk density, solid volume and porosity were measured. The measured thermal conductivity

  4. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    NASA Astrophysics Data System (ADS)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks to form a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken into consideration. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show a reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid response to changes in ground conductivity and resistances shows similar results to a lesser extent. This leads us to consider GIC as a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.

  5. Inducible mouse models illuminate parameters influencing epigenetic inheritance.

    PubMed

    Wan, Mimi; Gu, Honggang; Wang, Jingxue; Huang, Haichang; Zhao, Jiugang; Kaundal, Ravinder K; Yu, Ming; Kushwaha, Ritu; Chaiyachati, Barbara H; Deerhake, Elizabeth; Chi, Tian

    2013-02-01

    Environmental factors can stably perturb the epigenome of exposed individuals and even that of their offspring, but the pleiotropic effects of these factors have posed a challenge for understanding the determinants of mitotic or transgenerational inheritance of the epigenetic perturbation. To tackle this problem, we manipulated the epigenetic states of various target genes using a tetracycline-dependent transcription factor. Remarkably, transient manipulation at appropriate times during embryogenesis led to aberrant epigenetic modifications in the ensuing adults regardless of the modification patterns, target gene sequences or locations, and despite lineage-specific epigenetic programming that could reverse the epigenetic perturbation, thus revealing extraordinary malleability of the fetal epigenome, which has implications for 'metastable epialleles'. However, strong transgenerational inheritance of these perturbations was observed only at transgenes integrated at the Col1a1 locus, where both activating and repressive chromatin modifications were heritable for multiple generations; such a locus is unprecedented. Thus, in our inducible animal models, mitotic inheritance of epigenetic perturbation seems critically dependent on the timing of the perturbation, whereas transgenerational inheritance additionally depends on the location of the perturbation. In contrast, other parameters examined, particularly the chromatin modification pattern and DNA sequence, appear irrelevant.

  6. Chiropractic Adjustment

    MedlinePlus

    ... structural alignment and improve your body's physical function. Low back pain, neck pain and headache are the most common ... treated. Chiropractic adjustment can be effective in treating low back pain, although much of the research done shows only ...

  7. Adjustment disorder

    MedlinePlus

    ... from other people Skipped heartbeats and other physical complaints Trembling or twitching To have adjustment disorder, you ...

  8. Analysis of the temporal dynamics of model performance and parameter sensitivity for hydrological models

    NASA Astrophysics Data System (ADS)

    Reusser, D.; Zehe, E.

    2009-04-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. We present a method for such a hydrological model performance assessment with a high temporal resolution. Information about possible relevant processes during times with distinct model performance is obtained from parameter sensitivity analysis - also with high temporal resolution. We illustrate the combined approach of temporally resolved model performance and parameter sensitivity for a rainfall-runoff modeling case study. The headwater catchment of the Wilde Weisseritz in the eastern Ore Mountains is simulated with the conceptual model WaSiM-ETH. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key to the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The temporally resolved sensitivity analysis is based on the FAST algorithm. The final outcome of the proposed method is a time series of the occurrence of dominant error types as well as a time series of the relative parameter sensitivity. For the two case studies analyzed here, 6 error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors. The parameter sensitivity helps to identify the relevant model parts.
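    A moving-window performance measure of the kind described can be sketched as a sliding-window Nash-Sutcliffe efficiency; the discharge series below are synthetic illustrations, not the Wilde Weisseritz data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed" and "simulated" discharge series (hypothetical values);
# the simulation acquires a systematic bias late in the record
n = 1000
obs = 5 + 2 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.2, n)
sim = obs + rng.normal(0, 0.3, n) + 0.3 * (np.arange(n) > 600)

def moving_nse(obs, sim, window):
    """Nash-Sutcliffe efficiency in a sliding window (one value per start index)."""
    out = np.empty(len(obs) - window + 1)
    for i in range(len(out)):
        o, s = obs[i:i + window], sim[i:i + window]
        out[i] = 1 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
    return out

nse = moving_nse(obs, sim, window=100)
```

    A time series like `nse` (one row per performance measure) is the kind of high-dimensional matrix that the SOM and cluster analysis would then classify; here the late windows score visibly worse, flagging the period with the structural error.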

  9. A limit-cycle model of leg movements in cross-country skiing and its adjustments with fatigue.

    PubMed

    Cignetti, F; Schena, F; Mottet, D; Rouard, A

    2010-08-01

    Using dynamical modeling tools, the aim of the study was to establish a minimal model reproducing leg movements in cross-country skiing, and to evaluate possible adjustments of this model with fatigue. The participants (N=8) skied on a treadmill at 90% of their maximal oxygen consumption, up to exhaustion, using the diagonal stride technique. Qualitative analysis of leg kinematics portrayed in phase planes, Hooke planes, and velocity profiles suggested the inclusion in the model of a linear stiffness and an asymmetric van der Pol-type nonlinear damping. Quantitative analysis revealed that this model adequately reproduced the observed kinematic patterns of the leg, accounting for 87% of the variance. A rising influence of the stiffness term and a dropping influence of the damping terms were also evidenced with fatigue. The meaning of these changes was discussed in the framework of motor control.
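    An oscillator combining a linear stiffness with an asymmetric van der Pol-type damping, as the abstract describes, can be simulated to show the limit-cycle behaviour. The equation form and coefficients below are illustrative assumptions, not the authors' fitted values:

```python
import numpy as np

# Illustrative hybrid oscillator (NOT the fitted model):
#   x'' + k*x = (a - b*x**2 - c*x) * x'
# linear stiffness k; van der Pol-type damping whose c*x term
# breaks the left/right symmetry of the cycle.
k, a, b, c = 25.0, 2.0, 8.0, 1.0

def deriv(state):
    x, v = state
    return np.array([v, (a - b * x**2 - c * x) * v - k * x])

# Fixed-step fourth-order Runge-Kutta integration
dt, steps = 0.001, 20000
state = np.array([0.1, 0.0])
xs = np.empty(steps)
for i in range(steps):
    s1 = deriv(state)
    s2 = deriv(state + 0.5 * dt * s1)
    s3 = deriv(state + 0.5 * dt * s2)
    s4 = deriv(state + dt * s3)
    state = state + (dt / 6) * (s1 + 2 * s2 + 2 * s3 + s4)
    xs[i] = state[0]

# After transients, the trajectory settles onto a stable limit cycle
late = xs[steps // 2:]
amplitude = late.max() - late.min()
```

    Near the origin the damping term pumps energy in (negative damping), while at large amplitude it dissipates energy, so any small perturbation converges to a self-sustained cycle, which is the property that makes such models attractive for cyclic leg movements.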

  10. A Note on the Item Information Function of the Four-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Magis, David

    2013-01-01

    This article focuses on four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
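    Under the standard IRT definitions, the 4PL item response function and its information function can be evaluated numerically to locate the maximizing ability level; the item parameters below are illustrative:

```python
import numpy as np

def p4pl(theta, a, b, c, d):
    """4PL probability: lower asymptote c, upper asymptote d."""
    return c + (d - c) / (1 + np.exp(-a * (theta - b)))

def info4pl(theta, a, b, c, d):
    """Item information I(theta) = P'(theta)**2 / (P * (1 - P))."""
    p = p4pl(theta, a, b, c, d)
    e = np.exp(-a * (theta - b))
    dp = a * (d - c) * e / (1 + e) ** 2          # derivative of P
    return dp ** 2 / (p * (1 - p))

# Locate the ability level maximizing information on a fine grid
theta = np.linspace(-4, 4, 8001)
a, b, c, d = 1.2, 0.0, 0.15, 0.95                # illustrative item parameters
theta_max = theta[np.argmax(info4pl(theta, a, b, c, d))]
```

    Setting d = 1 recovers the 3PL case for which Lord derived the maximizing ability in closed form; the upper asymptote d < 1 shifts the maximum in the opposite direction to the guessing parameter c.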

  11. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.

  12. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  13. A novel population balance model to investigate the kinetics of in vitro cell proliferation: part II. Numerical solution, parameters' determination, and model outcomes.

    PubMed

    Fadda, Sarah; Cincotti, Alberto; Cao, Giacomo

    2012-03-01

    Based on the general theoretical model developed in Part I of this work, a series of numerical simulations related to the in vitro proliferation kinetics of adherent cells is here presented. First the complex task of assigning a specific value to all the parameters of the proposed population balance (PB) model is addressed, by also highlighting the difficulties arising when performing proper comparisons with experimental data. Then, a parametric sensitivity analysis is performed, thus identifying the more relevant parameters from a kinetics perspective. The proposed PB model can be adapted to describe cell growth under various conditions, by properly changing the value of the adjustable parameters. For this reason, model parameters able to mimic cell culture behavior under microgravity conditions are identified by means of a suitable parametric sensitivity analysis. Specifically, it is found that, as the volume growth parameter is reduced, proliferation slows down while cells arrest in G0/G1 or G2/M depending on the initial distribution of cell population. On the basis of this result, model capabilities have been tested by means of a proper comparison with literature experimental data related to the behavior of synchronized and not-synchronized cells under micro- and standard gravity levels.

  14. Mathematical models use varying parameter strategies to represent paralyzed muscle force properties: a sensitivity analysis

    PubMed Central

    Frey Law, Laura A; Shields, Richard K

    2005-01-01

    Background Mathematical muscle models may be useful for the determination of appropriate musculoskeletal stresses that will safely maintain the integrity of muscle and bone following spinal cord injury. Several models have been proposed to represent paralyzed muscle, but there have not been any systematic comparisons of modelling approaches to better understand the relationships between model parameters and muscle contractile properties. This sensitivity analysis of simulated muscle forces using three currently available mathematical models provides insight into the differences in modelling strategies as well as any direct parameter associations with simulated muscle force properties. Methods Three mathematical muscle models were compared: a traditional linear model with 3 parameters and two contemporary nonlinear models each with 6 parameters. Simulated muscle forces were calculated for two stimulation patterns (constant frequency and initial doublet trains) at three frequencies (5, 10, and 20 Hz). A sensitivity analysis of each model was performed by altering a single parameter through a range of 8 values, while the remaining parameters were kept at baseline values. Specific simulated force characteristics were determined for each stimulation pattern and each parameter increment. Significant parameter influences for each simulated force property were determined using ANOVA and Tukey's follow-up tests (α ≤ 0.05), and compared to previously reported parameter definitions. Results Each of the linear model's 3 parameters most clearly influences either simulated force magnitude or speed properties, consistent with previous parameter definitions. The nonlinear models' parameters displayed greater redundancy between force magnitude and speed properties. Further, previous parameter definitions for one of the nonlinear models were consistently supported, while the other was only partially supported by this analysis. Conclusion These three mathematical models use

  15. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    PubMed Central

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
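    As a rough, hypothetical illustration of the kind of regression described above (not the authors' MATLAB program, and reduced to a single 1:1 binding site fitted by brute-force grid search rather than four overlapping equilibria), an ITC isotherm fit might be sketched as:

    ```python
    import math

    def bound_conc(m_tot, l_tot, kd):
        # exact 1:1 binding: [ML] from the quadratic mass-balance solution
        b = m_tot + l_tot + kd
        return (b - math.sqrt(b * b - 4.0 * m_tot * l_tot)) / 2.0

    def injection_heats(kd, dh, m_tot, l_totals, cell_vol):
        # heat evolved per injection = dH * V * change in bound complex
        heats, prev = [], 0.0
        for l_tot in l_totals:
            ml = bound_conc(m_tot, l_tot, kd)
            heats.append(dh * cell_vol * (ml - prev))
            prev = ml
        return heats

    def fit_grid(obs, m_tot, l_totals, cell_vol, kd_grid, dh_grid):
        # brute-force sum-of-squared-errors minimisation over a (Kd, dH) grid
        best = None
        for kd in kd_grid:
            for dh in dh_grid:
                pred = injection_heats(kd, dh, m_tot, l_totals, cell_vol)
                sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
                if best is None or sse < best[0]:
                    best = (sse, kd, dh)
        return best[1], best[2]
    ```

    A real analysis would fit Keq, ΔH, and n simultaneously with a gradient-based nonlinear least-squares routine and propagate the fit errors, as the paper's error analysis does for up to nine parameters.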

  16. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

    A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2x10 -9 to 6.0 x 10 -8 feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from

  17. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    NASA Astrophysics Data System (ADS)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes, in Mexico, goes back to the 60´s due the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) when the project of seismic instrumentation assumes a great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation program or this is still in development.Because of described situation some relevant earthquakes (e.g. Huajuapan de León Oct 24, 1980 M=7.1, Tehuacán Jun 15, 1999 M=7 and Puerto Escondido Sep 30, 1999 M= 7.5) have not been registered properly in some cities, like Puebla and Oaxaca, and that were damaged during those earthquakes. Fortunately, the good maintenance work carried out in the seismic network has permitted the recording of an important number of small events in those cities. So in this research we present a methodology based on the use of neural networks to estimate significant duration and in some cases the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicenter distance, focal depth and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded in the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks as a soft computing tool that use a multi-layer feed-forward architecture provide good estimations of the target parameters and they also have a good predictive capacity to estimate strong ground motion duration and response spectra.

  18. Dynamic hydrologic modeling using the zero-parameter Budyko model with instantaneous dryness index

    NASA Astrophysics Data System (ADS)

    Biswal, Basudev

    2016-09-01

    Long-term partitioning of hydrologic quantities is achieved by using the zero-parameter Budyko model which defines a dryness index. However, this approach is not suitable for dynamic partitioning particularly at diminishing timescales, and therefore, a universally applicable zero-parameter model remains elusive. Here an instantaneous dryness index is proposed which enables dynamic hydrologic modeling using the Budyko model. By introducing a "decay function" that characterizes the effects of antecedent rainfall and solar energy on the dryness state of a basin at a time, I propose the concept of instantaneous dryness index and use the Budyko function to perform continuous hydrologic partitioning. Using the same decay function, I then obtain discharge time series from the effective rainfall time series. The model is evaluated by considering data form 63 U.S. Geological Survey basins. Results indicate the possibility of using the proposed framework as an alternative platform for prediction in ungagued basins.
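    The classical zero-parameter curve referred to above is easy to state concretely. A minimal sketch of long-term Budyko partitioning, using the original Budyko (1974) form (the paper's instantaneous-dryness extension is not reproduced here):

    ```python
    import math

    def budyko_evap_ratio(phi):
        # zero-parameter Budyko (1974) curve: E/P as a function of the
        # dryness index phi = PET/P; E/P -> phi as phi -> 0 (energy limited)
        # and E/P -> 1 as phi -> infinity (water limited)
        return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

    def partition(precip, pet):
        # long-term partitioning of precipitation into evaporation and runoff
        phi = pet / precip
        evap = precip * budyko_evap_ratio(phi)
        return evap, precip - evap
    ```

    For example, a basin with 800 mm/yr precipitation and 1200 mm/yr potential evapotranspiration (phi = 1.5) partitions most of its water into evaporation and the remainder into runoff.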

  19. Exploring Factor Model Parameters across Continuous Variables with Local Structural Equation Models.

    PubMed

    Hildebrandt, Andrea; Lüdtke, Oliver; Robitzsch, Alexander; Sommer, Christopher; Wilhelm, Oliver

    2016-01-01

    Using an empirical data set, we investigated variation in factor model parameters across a continuous moderator variable and demonstrated three modeling approaches: multiple-group mean and covariance structure (MGMCS) analyses, local structural equation modeling (LSEM), and moderated factor analysis (MFA). We focused on how to study variation in factor model parameters as a function of continuous variables such as age, socioeconomic status, ability levels, acculturation, and so forth. Specifically, we formalized the LSEM approach in detail as compared with previous work and investigated its statistical properties with an analytical derivation and a simulation study. We also provide code for the easy implementation of LSEM. The illustration of methods was based on cross-sectional cognitive ability data from individuals ranging in age from 4 to 23 years. Variations in factor loadings across age were examined with regard to the age differentiation hypothesis. LSEM and MFA converged with respect to the conclusions. When there was a broad age range within groups and varying relations between the indicator variables and the common factor across age, MGMCS produced distorted parameter estimates. We discuss the pros of LSEM compared with MFA and recommend using the two tools as complementary approaches for investigating moderation in factor model parameters.

  20. Sample size planning for longitudinal models: accuracy in parameter estimation for polynomial change parameters.

    PubMed

    Kelley, Ken; Rausch, Joseph R

    2011-12-01

    Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals underscore the importance of obtaining sufficiently accurate estimates of group differences in change. We derived expressions that allow researchers to plan sample size to achieve the desired confidence interval width for group differences in change for orthogonal polynomial change parameters. The approaches developed provide the expected confidence interval width to be sufficiently narrow, with an extension that allows some specified degree of assurance (e.g., 99%) that the confidence interval will be sufficiently narrow. We make computer routines freely available, so that the methods developed can be used by researchers immediately.

  1. Adaptive Detection and Parameter Estimation for Multidimensional Signal Models

    DTIC Science & Technology

    1989-04-19

    expected value of the non-adaptive parameter array estimator directly from Equation (5-1), using the fact that .zP = dppH = d We obtain EbI = (e-H E eI 1...depend only on the dimensional parameters of the problem. We will derive these properties shortly, but first we wish to express the conditional pdf

  2. Nuclear magnetic resonance parameters of atomic xenon dissolved in Gay-Berne model liquid crystal.

    PubMed

    Lintuvuori, Juho; Straka, Michal; Vaara, Juha

    2007-03-01

    We present constant-pressure Monte Carlo simulations of nuclear magnetic resonance (NMR) spectral parameters, nuclear magnetic shielding relative to the free atom as well as nuclear quadrupole coupling, for atomic xenon dissolved in a model thermotropic liquid crystal. The solvent is described by Gay-Berne (GB) molecules with parametrization κ=4.4, κ′=20.0, and μ=ν=1. The reduced pressure P*=2.0 is used. Previous simulations of a pure GB system with this parametrization have shown that upon lowering the temperature, the model exhibits isotropic, nematic, smectic-A, and smectic-B/molecular crystal phases. We introduce spherical xenon solutes and adjust the energy and length scales of the GB-Xe interaction to those of the GB-GB interaction. This is done through first-principles quantum chemical calculations carried out for a dimer of model mesogens as well as the mesogen-xenon complex. We preparametrize quantum chemically the Xe nuclear shielding and quadrupole coupling tensors when interacting with the model mesogen, and use the parametrization in a pairwise additive fashion in the analysis of the simulation. We present the temperature evolution of 129/131Xe shielding and 131Xe quadrupole coupling in the different phases of the GB model. From the simulations, separate isotropic and anisotropic contributions to the experimentally available total shielding can be obtained. At the experimentally relevant concentration, the presence of the xenon atoms does not significantly affect the phase behavior as compared to the pure GB model. The simulations reproduce many of the characteristic experimental features of Xe NMR in real thermotropic LCs: discontinuity in the value or trends of the shielding and quadrupole coupling at the nematic-isotropic and smectic-A-nematic phase transitions, nonlinear shift evolution in the nematic phase reflecting the behavior of the orientational order parameter, and decreasing shift in the smectic-A phase. The last

  3. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  4. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
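    Parameter screening of the kind described above can be illustrated with a simple one-at-a-time (OAT) perturbation scheme. The sketch below uses a toy surrogate in place of Delft3D, and the parameter names and coefficients are purely illustrative assumptions:

    ```python
    def one_at_a_time(model, baseline, deltas):
        # OAT screening: perturb each parameter about the baseline in turn and
        # record the magnitude of the output change per unit perturbation
        base_out = model(baseline)
        sensitivity = {}
        for name, delta in deltas.items():
            perturbed = dict(baseline)
            perturbed[name] = baseline[name] + delta
            sensitivity[name] = abs(model(perturbed) - base_out) / abs(delta)
        return sensitivity

    # toy surge surrogate (illustrative only): peak surge rises with the wind
    # drag coefficient and falls slightly with bottom roughness
    def toy_surge(p):
        return 2.5 * p["wind_drag"] * 1e3 - 0.4 * p["roughness"]

    baseline = {"wind_drag": 1.2e-3, "roughness": 0.02}
    deltas = {"wind_drag": 1e-4, "roughness": 1e-3}
    scores = one_at_a_time(toy_surge, baseline, deltas)
    ```

    Note that OAT screening cannot capture the parameter interactions and nonlinear responses the study reports; those require a global method (e.g. variance-based or Morris screening) that varies parameters jointly.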

  5. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.

    2015-10-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  6. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithm of automatic parameter identification is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters is developed. Finally, with a typical water pipe network selected as a case, a case study on automatic identification of water pipe network model parameters is conducted and satisfactory results are achieved.
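    The RSA idea referenced above, Monte Carlo sampling the parameter priors, splitting samples into behavioural and non-behavioural sets by an error threshold, and comparing the marginal parameter distributions, can be sketched as follows. The toy model, threshold, and parameter names are illustrative assumptions, not taken from the paper:

    ```python
    import random

    def rsa_screen(model, priors, threshold, n=500, seed=1):
        # Regionalized Sensitivity Analysis: sample the priors, classify each
        # sample as behavioural (error <= threshold) or non-behavioural, and
        # score each parameter by the maximum CDF separation (a KS-like
        # statistic) between the two groups; large scores mean sensitive.
        rng = random.Random(seed)
        names = list(priors)
        behav = {k: [] for k in names}
        non = {k: [] for k in names}
        for _ in range(n):
            sample = {k: rng.uniform(*priors[k]) for k in names}
            bucket = behav if model(sample) <= threshold else non
            for k in names:
                bucket[k].append(sample[k])

        def ks(a, b):
            # max vertical distance between the two empirical CDFs
            return max(
                abs(sum(x <= p for x in a) / len(a) - sum(x <= p for x in b) / len(b))
                for p in a + b
            )

        return {k: ks(behav[k], non[k]) for k in names if behav[k] and non[k]}
    ```

    A sensitive parameter's behavioural distribution differs sharply from its non-behavioural one, while an insensitive parameter shows nearly identical (uniform) distributions in both groups.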

  7. Multi-Scale 7DOF View Adjustment.

    PubMed

    Cho, Isaac; Li, Jialei; Wartell, Zachary

    2017-02-13

    Multi-scale virtual environments contain geometric details ranging over several orders of magnitude and typically employ out-of-core rendering techniques. When displayed in virtual reality systems, this entails using a 7 degree-of-freedom (DOF) view model where view scale is a separate 7th DOF in addition to 6DOF view pose. Dynamic adjustment of this and other view parameters becomes very important to usability. In this paper, we evaluate how two adjustment techniques interact with uni- and bi-manual 7 degree-of-freedom navigation in DesktopVR and a CAVE. The travel task has two stages, an initial targeted zoom and a detailed geometric inspection. The results show benefits of the auto-adjustments on completion time and stereo fusion issues, but only in certain circumstances. Peculiar view configuration examples show the difficulty of creating robust adjustment rules.

  8. A Paradox between IRT Invariance and Model-Data Fit When Utilizing the One-Parameter and Three-Parameter Models

    ERIC Educational Resources Information Center

    Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon

    2008-01-01

    The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…

  9. Testing Momentum Enhancement of Ribbon Fin Based Propulsion Using a Robotic Model With an Adjustable Body

    NASA Astrophysics Data System (ADS)

    English, Ian; Curet, Oscar

    2016-11-01

    Lighthill and Blake's 1990 momentum enhancement theory suggests there is a multiplicative propulsive effect linked to the ratio of body and fin heights in Gymnotiform and Balistiform swimmers, which propel themselves using multi-rayed undulating fins while keeping their bodies mostly rigid. Proof of such a momentum enhancement could have a profound effect on unmanned underwater vehicle design and shed light on the evolutionary advantage to body-fin ratios found in nature, shown as optimal for momentum enhancement in Lighthill and Blake's theory. A robotic ribbon fin with twelve independent fin rays, elastic fin membrane, and a body of adjustable height was developed specifically to experimentally test momentum enhancement. Thrust tests for various body heights were conducted in a recirculating flow tank at different flow speeds and fin flapping frequencies. When comparing thrust at different body heights, flow speeds, and frequencies to a 'no-body' thrust test case at each frequency and flow speed, data indicate there is no momentum enhancement factor due to the presence of a body on top of an undulating fin. This suggests that if there is a benefit to a specific ratio between body and fin height, it is not due to momentum enhancement.

  10. Mapping disability-adjusted life years: a Bayesian hierarchical model framework for burden of disease and injury assessment.

    PubMed

    MacNab, Ying C

    2007-11-20

    This paper presents a Bayesian disability-adjusted life year (DALY) methodology for spatial and spatiotemporal analyses of disease and/or injury burden. A Bayesian disease mapping model framework, which blends together spatial modelling, shared-component modelling (SCM), temporal modelling, ecological modelling, and non-linear modelling, is developed for small-area DALY estimation and inference. In particular, we develop a model framework that enables SCM as well as multivariate CAR modelling of non-fatal and fatal disease or injury rates and facilitates spline smoothing for non-linear modelling of temporal rate and risk trends. Using British Columbia (Canada) hospital admission-separation data and vital statistics mortality data on non-fatal and fatal road traffic injuries in the male population aged 20-39 for the years 1991-2000, for 84 local health areas and 16 health service delivery areas, spatial and spatiotemporal estimation and inference on years of life lost due to premature death, years lived with disability, and DALYs are presented. Fully Bayesian estimation and inference, with Markov chain Monte Carlo implementation, are illustrated. We present a methodological framework within which the DALY and the Bayesian disease mapping methodologies interface and intersect. Its development brings the relative importance of premature mortality and disability into the assessment of community health and health needs in order to provide reliable information and evidence for community-based public health surveillance and evaluation, disease and injury prevention, and resource provision.

  11. Finding Parameters by Tabu Search Algorithm to Construct a Coupled Heat and Mass Transfer Model for Green Roof

    NASA Astrophysics Data System (ADS)

    Chen, P.; Tung, C.

    2012-12-01

    Green roofs can lower building temperatures; they are therefore now widely applied for indoor temperature adjustment. This study builds a coupled heat and mass transfer model, in which the water vapor in the substrate is taken into consideration, based on the concept of energy balance. With the parameters optimized by a Tabu search algorithm, data from the experiment are used to validate the model. In the study, both the model and the experimental green roof consist of four layers: canopy, substrate, drainage, and concrete rooftop. The heat flux of each layer is calculated in the model, using energy balance equations as well as numerical methods to simulate water-related thermal effects in soil, to capture the heat transfer process. The experimental site is located on the rooftop of the Hydrotech Research Institute, National Taiwan University, Taiwan. Since the material of the substrate layer has high porosity, the results show a contradiction with energy conservation when the influence of water is neglected. The parameters identified by Tabu search are found to be reasonable for the experiment. The main contribution of the study is the construction of a thermal model for green roofs with a parameter optimization procedure, which can be used as an effective assessment method to quantify the heat-reduction performance of a green roof on the underlying building.
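    The core of a Tabu search of the kind used for parameter optimization can be sketched in a few lines. The quadratic objective and integer grid below are toy stand-ins for the green roof model's parameter space, not the study's actual formulation:

    ```python
    def tabu_search(objective, start, neighbors, iters=200, tabu_len=10):
        # minimal Tabu search: at each step move to the best non-tabu
        # neighbour (even if worse), keeping a short-term memory of recently
        # visited points to escape local minima and avoid cycling
        current = best = start
        best_val = objective(start)
        tabu = [start]
        for _ in range(iters):
            candidates = [n for n in neighbors(current) if n not in tabu]
            if not candidates:
                break
            current = min(candidates, key=objective)
            tabu.append(current)
            if len(tabu) > tabu_len:
                tabu.pop(0)
            if objective(current) < best_val:
                best, best_val = current, objective(current)
        return best, best_val
    ```

    In a real calibration, the objective would be the misfit between simulated and measured layer temperatures, and the search space the continuous (suitably discretized) thermal parameters.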

  12. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  13. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance holds, which forbids cycling, the ranges of the parameter values are further reduced (and can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry and exit to the long-pause state.

  14. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    SciTech Connect

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  15. A Primer on the 2- and 3-Parameter Item Response Theory Models.

    ERIC Educational Resources Information Center

    Thornton, Artist

    Item response theory (IRT) is a useful and effective tool for item response measurement if used in the proper context. This paper discusses the sets of assumptions under which responses can be modeled while exploring the framework of the IRT models relative to response testing. The one parameter model, or one parameter logistic model, is perhaps…

  16. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.

  17. Parameter identification for the electrical modeling of semiconductor bridges.

    SciTech Connect

    Gray, Genetha Anne

    2005-03-01

    Semiconductor bridges (SCBs) are commonly used as initiators for explosive and pyrotechnic devices. Their advantages include reduced voltage and energy requirements and exceptional safety features. Moreover, the design of systems which implement SCBs can be expedited using electrical simulation software. Successful use of this software requires that certain parameters be correctly chosen. In this paper, we explain how these parameters can be identified using optimization. We describe the problem focusing on the application of a direct optimization method for its solution, and present some numerical results.

  18. Ranking vocal fold model parameters by their influence on modal frequencies

    PubMed Central

    Cook, Douglas D.; Nauman, Eric; Mongeau, Luc

    2009-01-01

    The purpose of this study was to identify, using computational models, the vocal fold parameters that are most influential in determining the vibratory characteristics of the vocal folds. The sensitivities of vocal fold modal frequencies to variations in model parameters were used to determine the most influential parameters. A detailed finite element model of the human vocal fold was created. The model was defined by eight geometric and six material parameters. The model included transitional boundary regions to idealize the complex physiological structure of real human subjects. Parameters were simultaneously varied over ranges representative of actual human vocal folds. Three separate statistical analysis techniques were used to identify the most and least sensitive model parameters with respect to modal frequency. The results from all three methods consistently suggest that a set of five parameters is most influential in determining the vibratory characteristics of the vocal folds. PMID:19813811
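
    A minimal sketch of the sensitivity-ranking idea: perturb each parameter in turn, approximate the derivative of a modal frequency by central differences, and rank parameters by the magnitude of that derivative. The `modal_frequency` function here is an arbitrary stand-in, not the paper's finite element model:

```python
import numpy as np

def modal_frequency(p):
    # placeholder model: the frequency depends unevenly on the parameters
    return 100.0 + 30.0 * p[0] + 5.0 * p[1] ** 2 + 0.1 * p[2]

def rank_sensitivities(f, p0, rel_step=0.05):
    p0 = np.asarray(p0, dtype=float)
    sens = []
    for i in range(p0.size):
        h = rel_step * max(abs(p0[i]), 1e-8)
        up, dn = p0.copy(), p0.copy()
        up[i] += h
        dn[i] -= h
        # central-difference estimate of |d f / d p_i|
        sens.append(abs(f(up) - f(dn)) / (2.0 * h))
    return np.argsort(sens)[::-1]  # most influential first

order = rank_sensitivities(modal_frequency, [1.0, 2.0, 3.0])
print(order)  # parameter indices, most influential first
```

    Real studies (including the one above) use statistical designs rather than one-at-a-time differences, but the ranking output has the same shape.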

  19. A modified inverse procedure to calibrate parameters of land subsidence model and its field application in Shanghai

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Ye, S.; Wu, J.

    2012-12-01

    Land subsidence prediction depends on a theoretical model and the values of the parameters involved. An advanced inverse model (InvCOMPAC) has been developed for calibrating five parameters (K'v, S'skv, S'ske, S'sk and p'max0) in a compacting confined aquifer system (Liu and Helm, 2008). InvCOMPAC combines two steps: graphic analysis to obtain an initial set of parameters, and a Newton-Raphson adjustment procedure. Both steps were modified in this study, and the resulting inverse model is called MInvCOMPAC. In the graphic-analysis step, smoothed data are used to obtain regular hydrographs, compaction history curves, and stress-strain curves, which reduces the uncertainty in the initial parameter values. In the adjustment step, three modifications improve the robustness and practicality of the Newton-Raphson method. First, outliers are removed from the observed data with the help of the smoothed data, which reduces the risk of failure of the numerical optimization. Second, a modified Newton method replaces the Newton-Raphson method, improving the robustness of the optimization. Third, a variable scale factor replaces the fixed scale factor in the modified Newton method, simplifying it. As a case study, MInvCOMPAC was applied to obtain values of the five parameters above for the Shanghai land subsidence model. The results were evaluated by the fit between observed and calculated data, and compared to those of InvCOMPAC. Reference: Liu Y, Helm DC. Inverse procedure for calibrating parameters that control land subsidence caused by subsurface fluid withdrawal: 2. Field application. Water Resour Res. 2008;44(7). Acknowledgements: Funding for this research from 973 Program No. 2010CB428803 and from NSFC Nos. 40872155, 40725010 and 41030746 is gratefully acknowledged.
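
    The "modified Newton method with a variable scale factor" described above is, in spirit, a damped Newton iteration that shrinks the step until the residual decreases. A toy one-parameter sketch under that assumption (the model and starting point are illustrative, not the subsidence model):

```python
def damped_newton(residual, jacobian, x0, max_iter=50, tol=1e-10):
    x = float(x0)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step = r / J  # Newton step for a scalar problem
        scale = 1.0
        # variable scale factor: halve the step until the residual decreases
        while abs(residual(x - scale * step)) >= abs(r) and scale > 1e-8:
            scale *= 0.5
        x -= scale * step
        if abs(residual(x)) < tol:
            break
    return x

# toy calibration problem: find x with x**3 - 8 = 0 (root at 2)
root = damped_newton(lambda x: x**3 - 8.0, lambda x: 3.0 * x**2, x0=5.0)
print(round(root, 6))  # 2.0
```

    The damping guards against the divergence that plain Newton-Raphson can suffer from a poor initial guess, which is the robustness issue the abstract targets.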

  20. The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests

    ERIC Educational Resources Information Center

    Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.

    2004-01-01

    One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…

  1. The timing of the Black Sea flood event: Insights from modeling of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Goldberg, Samuel L.; Lau, Harriet C. P.; Mitrovica, Jerry X.; Latychev, Konstantin

    2016-10-01

    We present a suite of gravitationally self-consistent predictions of sea-level change since Last Glacial Maximum (LGM) in the vicinity of the Bosphorus and Dardanelles straits that combine signals associated with glacial isostatic adjustment (GIA) and the flooding of the Black Sea. Our predictions are tuned to fit a relative sea level (RSL) record at the island of Samothrace in the north Aegean Sea and they include realistic 3-D variations in viscoelastic structure, including lateral variations in mantle viscosity and the elastic thickness of the lithosphere, as well as weak plate boundary zones. We demonstrate that 3-D Earth structure and the magnitude of the flood event (which depends on the pre-flood level of the lake) both have significant impact on the predicted RSL change at the location of the Bosphorus sill, and therefore on the inferred timing of the marine incursion. We summarize our results in a plot showing the predicted RSL change at the Bosphorus sill as a function of the timing of the flood event for different flood magnitudes up to 100 m. These results suggest, for example, that a flood event at 9 ka implies that the elevation of the sill was lowered through erosion by ∼14-21 m during, and after, the flood. In contrast, a flood event at 7 ka suggests erosion of ∼24-31 m at the sill since the flood. More generally, our results will be useful for future research aimed at constraining the details of this controversial, and widely debated geological event.

  2. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in the values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slowly varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of the estimated model states in different components of the coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere, with its chaotic nature, is the major source of inaccuracy in the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for parameters in air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides guidance for using real observations to optimize model parameters in a coupled general circulation model and thereby improve climate analysis and predictions.
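
    The role of the state-parameter covariance can be illustrated with a toy ensemble: the regression gain cov(p, x)/var(x), which projects observational information onto the parameter, shrinks as the state estimate becomes noisier. All numbers below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
p_true = rng.normal(1.0, 0.2, n)      # ensemble of parameter values
x_clean = 3.0 * p_true                # state responds linearly to the parameter
gains = []
for noise in (0.1, 2.0):              # accurate vs. inaccurate state estimates
    x = x_clean + rng.normal(0.0, noise, n)
    cov = np.cov(p_true, x)
    gains.append(cov[0, 1] / cov[1, 1])  # regression of parameter on state
print([round(g, 3) for g in gains])   # the gain shrinks as state noise grows
```

    With the noisy state, most of the variance in x is noise rather than parameter signal, so little observational information reaches the parameter, mirroring the abstract's signal-to-noise argument.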

  3. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  4. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  5. Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model

    NASA Astrophysics Data System (ADS)

    Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan

    2016-12-01

    The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and the application of the results to the IPS-driven ENLIL model is discussed.
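
    The sweep-and-rank procedure described above can be sketched as a grid search: enumerate parameter combinations, score each model run against observations, and keep the best-scoring sets. The model, parameter values, and scorer below are hypothetical stand-ins, not the ENLIL interface:

```python
import itertools

def score(model_output, observed):
    # root-mean-square error between model output and observations
    return (sum((m - o) ** 2 for m, o in zip(model_output, observed))
            / len(observed)) ** 0.5

def run_model(params, observed):
    # stand-in for a model run; returns a fake time series
    a, b, c = params
    return [a + b * t + c for t, _ in enumerate(observed)]

observed = [1.0, 2.1, 2.9, 4.2]
grid = itertools.product([0.5, 1.0, 1.5], [0.8, 1.0, 1.2], [-0.1, 0.0, 0.1])
ranked = sorted(grid, key=lambda p: score(run_model(p, observed), observed))
top10 = ranked[:10]
print(top10[0])  # best-scoring combination
```

    With 1,440 real combinations the structure is the same; only the model call and the skill score (e.g., against ACE solar wind data) change.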

  6. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  7. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    NASA Astrophysics Data System (ADS)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜ 2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( < 0.063 mm) ash (3-59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
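
    The φ units quoted above follow the standard Krumbein grain-size scale, in which the diameter in millimetres is d = 2^(-φ). A quick check that the quoted best-fit range of 2.3-2.7φ matches 0.20-0.15 mm:

```python
def phi_to_mm(phi):
    """Krumbein grain-size scale: diameter in mm for a given phi value."""
    return 2.0 ** (-phi)

print(round(phi_to_mm(2.3), 2), round(phi_to_mm(2.7), 2))  # 0.2 0.15
```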

  8. Modeling Subducting Slabs: Structural Variations due to Thermal Models, Latent Heat Feedback, and Thermal Parameter

    NASA Astrophysics Data System (ADS)

    Marton, F. C.

    2001-12-01

    The thermal, mineralogical, and buoyancy structures of thermal-kinetic models of subducting slabs are highly dependent upon a number of parameters, especially if the metastable persistence of olivine in the transition zone is investigated. The choice of starting thermal model for the lithosphere, whether a cooling halfspace (HS) or plate model, can have a significant effect, resulting in metastable wedges of olivine that differ in size by up to two to three times for high values of the thermal parameter (φ). Moreover, as φ is the product of the age of the lithosphere at the trench, the convergence rate, and the dip angle, slabs with similar values of φ can show great variations in structure as these constituents change. This is especially true for old lithosphere, as the lithosphere continually cools and thickens with age in HS models, whereas plate models, with parameters from Parsons and Sclater [1977] (PS) or Stein and Stein [1992] (GDH1), reach a thermal steady state and constant thickness in about 70 My. In addition, the latent heats (q) of the phase transformations of the Mg2SiO4 polymorphs can have significant effects in the slabs. Including q feedback in the models raises the temperature and reduces the extent of metastable olivine, causing the sizes of the metastable wedges to vary by factors of up to two. The effects of the choice of thermal model, inclusion or exclusion of q feedback, and variations in the constituents of φ are investigated for several model slabs.

  9. A lumped parameter model of the polymer electrolyte fuel cell

    NASA Astrophysics Data System (ADS)

    Chu, Keonyup; Ryu, Junghwan; Sunwoo, Myoungho

    A model of a polymer electrolyte fuel cell (PEFC) is developed that captures dynamic behaviour for control purposes. The model is mathematically simple, but accounts for the essential phenomena that define PEFC performance. In particular, performance depends principally on humidity, temperature and gas pressure in the fuel cell system. To accurately simulate PEFC operation, the effects of water transport, membrane hydration, temperature, and mass transport in the fuel cell system are simultaneously coupled in the model. The PEFC model addresses three physically distinct fuel cell components, namely, the anode channel, the cathode channel, and the membrane electrode assembly (MEA). The laws of mass and energy conservation are applied to describe each physical component as a control volume. In addition, the MEA model includes a steady-state electrochemical model, which consists of membrane hydration and stack voltage models.

  10. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
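
    One common way to correct least-squares uncertainties for colored residuals is to inflate the white-noise parameter covariance by a factor built from the residual autocorrelation over a small number of lags. The sketch below applies a simplified batch version of that idea to synthetic data; it is not the paper's recursive formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])  # regressors
# colored noise: first-order low-pass filtered white noise
e = np.zeros(n)
w = rng.normal(0, 0.1, n)
for k in range(1, n):
    e[k] = 0.9 * e[k - 1] + w[k]
y = X @ np.array([1.0, 2.0]) + e

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ theta
s2 = r @ r / (n - X.shape[1])
cov_white = s2 * np.linalg.inv(X.T @ X)   # assumes white residuals

# autocorrelation-based inflation over a small number of lags
lags = 20
rho = [np.dot(r[:-k], r[k:]) / np.dot(r, r) for k in range(1, lags + 1)]
inflation = 1.0 + 2.0 * sum(rho)
cov_colored = inflation * cov_white
print(np.sqrt(np.diag(cov_colored)))      # corrected standard errors
```

    With strongly colored residuals the inflation factor is well above 1, which is the "larger, more honest error bars" effect the abstract reports; truncating the autocorrelation to a few lags mirrors the paper's memory-saving observation.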

  11. Macroscopic control parameter for avalanche models for bursty transport

    SciTech Connect

    Chapman, S. C.; Rowlands, G.; Watkins, N. W.

    2009-01-15

    Similarity analysis is used to identify the control parameter R_A for the subset of avalanching systems that can exhibit self-organized criticality (SOC). This parameter expresses the ratio of driving to dissipation. The transition to SOC, when the number of excited degrees of freedom is maximal, is found to occur when R_A → 0. This is in the opposite sense to (Kolmogorov) turbulence, thus identifying a deep distinction between turbulence and SOC and suggesting an observable property that could distinguish them. A corollary of this similarity analysis is that SOC phenomenology, that is, power law scaling of avalanches, can persist for finite R_A with the same R_A → 0 exponent if the system supports a sufficiently large range of lengthscales, necessary for SOC to be a candidate for physical (R_A finite) systems.

  12. An Integrated Tool for Estimation of Material Model Parameters (PREPRINT)

    DTIC Science & Technology

    2010-04-01

    irrevocable worldwide license to use, modify, reproduce, release, perform, display, or disclose the work by or on behalf of the U.S. Government. 14 ... vf, and wf. The filtered v profiles are shown in Figure 4. For the plastic deformation data we found that the filtering could not correct the ... wf near the top right corner. We need to use the vf data for our parameter estimation. Since the geometry and loading are symmetric in the FEM

  13. On Lower Confidence for PCS in Truncated Location Parameter Models

    DTIC Science & Technology

    1989-06-01

    statistic for the parameter θi. The natural selection rule is to select the population yielding the largest Xi as the best population. Thus, a question ...group. Then, a reasonable question is: what kind of confidence statement can be made regarding the PCS? For this purpose, based on the above given data...Institute of Statistics Purdue University National Central University West Lafayette, IN, USA Chung-Li, Taiwan, R.O.C. TaChen Liang Department of Mathematics

  14. Shaft adjuster

    DOEpatents

    Harry, H.H.

    1988-03-11

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.

  15. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  16. Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models.

    PubMed

    Liang, Hua; Wu, Hulin

    2008-12-01

    Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called "forward problem", the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the "inverse problem", the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study.
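
    The two-step estimation idea (smooth the measurements, then fit the ODE right-hand side by least squares) can be sketched for a toy model x' = -a·x. The moving-average smoother below stands in for the paper's local smoothing, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 200)
x_true = np.exp(-0.7 * t)                 # solution of x' = -a*x with a = 0.7
x_obs = x_true + rng.normal(0, 0.01, t.size)

# step 1: smooth the observations (simple moving average as the smoother)
k = 9
kernel = np.ones(k) / k
x_sm = np.convolve(x_obs, kernel, mode="same")

# step 2: derivative of the smoothed state, then least squares for a in
# x' = -a*x  =>  a = -<x', x> / <x, x>
interior = slice(k, t.size - k)           # drop the smoother's edge effects
dx = np.gradient(x_sm, t)
a_hat = -(dx[interior] @ x_sm[interior]) / (x_sm[interior] @ x_sm[interior])
print(round(a_hat, 2))  # close to 0.7
```

    Because no ODE solver is called inside the fit, this avoids repeated forward solves; the price is the smoothing bias the paper analyzes asymptotically.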

  17. Ultra-wideband, Wide Angle and Polarization-insensitive Specular Reflection Reduction by Metasurface based on Parameter-adjustable Meta-Atoms

    NASA Astrophysics Data System (ADS)

    Su, Jianxun; Lu, Yao; Zhang, Hui; Li, Zengrui; (Lamar) Yang, Yaoqing; Che, Yongxing; Qi, Kainan

    2017-02-01

    In this paper, an ultra-wideband, wide angle and polarization-insensitive metasurface is designed, fabricated, and characterized for suppressing the specular electromagnetic wave reflection or backward radar cross section (RCS). Square ring structure is chosen as the basic meta-atoms. A new physical mechanism based on size adjustment of the basic meta-atoms is proposed for ultra-wideband manipulation of electromagnetic (EM) waves. Based on hybrid array pattern synthesis (APS) and particle swarm optimization (PSO) algorithm, the selection and distribution of the basic meta-atoms are optimized simultaneously to obtain the ultra-wideband diffusion scattering patterns. The metasurface can achieve an excellent RCS reduction in an ultra-wide frequency range under x- and y-polarized normal incidences. The new proposed mechanism greatly extends the bandwidth of RCS reduction. The simulation and experiment results show the metasurface can achieve ultra-wideband and polarization-insensitive specular reflection reduction for both normal and wide-angle incidences. The proposed methodology opens up a new route for realizing ultra-wideband diffusion scattering of EM wave, which is important for stealth and other microwave applications in the future.

  18. Ultra-wideband, Wide Angle and Polarization-insensitive Specular Reflection Reduction by Metasurface based on Parameter-adjustable Meta-Atoms

    PubMed Central

    Su, Jianxun; Lu, Yao; Zhang, Hui; Li, Zengrui; (Lamar) Yang, Yaoqing; Che, Yongxing; Qi, Kainan

    2017-01-01

    In this paper, an ultra-wideband, wide angle and polarization-insensitive metasurface is designed, fabricated, and characterized for suppressing the specular electromagnetic wave reflection or backward radar cross section (RCS). Square ring structure is chosen as the basic meta-atoms. A new physical mechanism based on size adjustment of the basic meta-atoms is proposed for ultra-wideband manipulation of electromagnetic (EM) waves. Based on hybrid array pattern synthesis (APS) and particle swarm optimization (PSO) algorithm, the selection and distribution of the basic meta-atoms are optimized simultaneously to obtain the ultra-wideband diffusion scattering patterns. The metasurface can achieve an excellent RCS reduction in an ultra-wide frequency range under x- and y-polarized normal incidences. The new proposed mechanism greatly extends the bandwidth of RCS reduction. The simulation and experiment results show the metasurface can achieve ultra-wideband and polarization-insensitive specular reflection reduction for both normal and wide-angle incidences. The proposed methodology opens up a new route for realizing ultra-wideband diffusion scattering of EM wave, which is important for stealth and other microwave applications in the future. PMID:28181593

  19. Ultra-wideband, Wide Angle and Polarization-insensitive Specular Reflection Reduction by Metasurface based on Parameter-adjustable Meta-Atoms.

    PubMed

    Su, Jianxun; Lu, Yao; Zhang, Hui; Li, Zengrui; Lamar Yang, Yaoqing; Che, Yongxing; Qi, Kainan

    2017-02-09

    In this paper, an ultra-wideband, wide angle and polarization-insensitive metasurface is designed, fabricated, and characterized for suppressing the specular electromagnetic wave reflection or backward radar cross section (RCS). Square ring structure is chosen as the basic meta-atoms. A new physical mechanism based on size adjustment of the basic meta-atoms is proposed for ultra-wideband manipulation of electromagnetic (EM) waves. Based on hybrid array pattern synthesis (APS) and particle swarm optimization (PSO) algorithm, the selection and distribution of the basic meta-atoms are optimized simultaneously to obtain the ultra-wideband diffusion scattering patterns. The metasurface can achieve an excellent RCS reduction in an ultra-wide frequency range under x- and y-polarized normal incidences. The new proposed mechanism greatly extends the bandwidth of RCS reduction. The simulation and experiment results show the metasurface can achieve ultra-wideband and polarization-insensitive specular reflection reduction for both normal and wide-angle incidences. The proposed methodology opens up a new route for realizing ultra-wideband diffusion scattering of EM wave, which is important for stealth and other microwave applications in the future.

  20. Citizens' Perceptions of Flood Hazard Adjustments: An Application of the Protective Action Decision Model

    ERIC Educational Resources Information Center

    Terpstra, Teun; Lindell, Michael K.

    2013-01-01

    Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…

  1. Preserving Heterogeneity and Consistency in Hydrological Model Inversions by Adjusting Pedotransfer Functions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Numerical modeling is the dominant method for quantifying water flow and the transport of dissolved constituents in surface soils as well as the deeper vadose zone. While the fundamental laws that govern the mechanics of the flow processes in terms of Richards' and convection-dispersion equations a...

  2. A Unified Model Exploring Parenting Practices as Mediators of Marital Conflict and Children's Adjustment

    ERIC Educational Resources Information Center

    Coln, Kristen L.; Jordan, Sara S.; Mercer, Sterett H.

    2013-01-01

    We examined positive and negative parenting practices and psychological control as mediators of the relations between constructive and destructive marital conflict and children's internalizing and externalizing problems in a unified model. Married mothers of 121 children between the ages of 6 and 12 completed questionnaires measuring marital…

  3. A Gender-Moderated Model of Family Relationships and Adolescent Adjustment

    ERIC Educational Resources Information Center

    Elizur, Yoel; Spivak, Amos; Ofran, Shlomit; Jacobs, Shira

    2007-01-01

    The objective of this study was to explain why adolescent girls with conduct problems (CP) are more at risk than boys to develop emotional distress (ED) in a sample composed of Israeli-born and immigrant youth from Ethiopia and the former Soviet Union (n = 305, ages 14-18). We tested a structural equation model and found a very good fit to the…

  4. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    DTIC Science & Technology

    2012-01-01

    strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al7075-T651 alloy bar material failed to simulate correctly the penetration...structural components made of high strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al7075-T651 alloy bar material...rate sensitivity, Johnson-Cook, constitutive model. PACS: 62.20.Dc, 62.20.Fe, 62.50.+p, 83.60.La INTRODUCTION Aluminum 7075 alloys are

  5. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    USGS Publications Warehouse

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between  ∼  2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( <  0.063 mm) ash (3–59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.

  6. Atomic modeling of cryo-electron microscopy reconstructions – Joint refinement of model and imaging parameters

    PubMed Central

    Chapman, Michael S.; Trzynka, Andrew; Chapman, Brynmor K.

    2013-01-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5–2.5 Å at resolutions of 4.5–6 Å. PMID:23376441
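
The density calculation the abstract describes — atomic form factors damped by a microscope envelope and a low-pass filter, then inverse Fourier transformed — can be sketched in one dimension. The Gaussian coefficients and filter forms below are illustrative stand-ins, not the RSRef implementation.

```python
import numpy as np

def atomic_density_1d(gaussians, b_factor, resolution, n=256, box=20.0):
    """1-D sketch: the atomic form factor f(s) is modeled as a sum of
    Gaussians in spatial frequency s, multiplied by a B-factor envelope
    (atomic displacement / microscope envelope) and a Gaussian low-pass
    near the stated resolution, then inverse-Fourier-transformed to give
    real-space density on an n-point grid of length `box` (Angstrom)."""
    s = np.fft.rfftfreq(n, d=box / n)                     # 1/Angstrom
    f = sum(a * np.exp(-b * s**2) for a, b in gaussians)  # form factor
    envelope = np.exp(-b_factor * s**2 / 4.0)             # damping term
    lowpass = np.exp(-(resolution * s)**2)                # soft cutoff
    return np.fft.irfft(f * envelope * lowpass, n=n)

# Two-Gaussian stand-in for a light atom (coefficients illustrative);
# refining `resolution` or `b_factor` against an experimental map is the
# joint optimization the abstract describes.
rho = atomic_density_1d([(2.3, 20.0), (1.6, 0.6)], b_factor=30.0, resolution=6.0)
```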

  8. Tests for Regression Parameters in Power Transformation Models.

    DTIC Science & Technology

    1980-01-01

    of estimating the correct scale and then performing the usual linear model F-test in this estimated scale. We explore situations in which this...transformation model. In this model, a simple test consists of estimating the correct scale and then performing the usual linear model F-test in this...β̂λ̂(y1, y2) will be the least squares estimates in the estimated scale λ̂ and β̂λ(y1, y2) will be the least squares estimates calculated in the true but
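
The two-step procedure the snippet outlines — estimate the transformation, then run the usual F-test in the estimated scale — can be sketched as follows. The data-generating model and variable names are illustrative; `scipy.stats.boxcox` supplies the maximum-likelihood scale estimate.

```python
import numpy as np
from scipy import stats

def f_test_in_estimated_scale(y, X_full, X_reduced):
    """(1) Estimate the Box-Cox parameter lambda by maximum likelihood;
    (2) perform the usual linear-model F-test in the estimated scale,
    treating lambda-hat as if it were the true scale."""
    y_t, lam_hat = stats.boxcox(y)                  # step 1: estimated scale
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        r = y_t - X @ beta
        return float(r @ r), X.shape[1]
    rss_f, p_f = rss(X_full)
    rss_r, p_r = rss(X_reduced)
    df1, df2 = p_f - p_r, len(y) - p_f              # step 2: F statistic
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return F, lam_hat

# Illustrative data: positive response whose true scale is the log (lambda = 0)
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 5.0, 80)
y = np.exp(0.3 * x + rng.normal(0.0, 0.1, 80))
X = np.column_stack([np.ones_like(x), x])   # full model: intercept + slope
X0 = np.ones((len(x), 1))                   # reduced model: intercept only
F, lam_hat = f_test_in_estimated_scale(y, X, X0)
```

The paper's concern is precisely that this test ignores the sampling variability of λ̂; the sketch reproduces the naive procedure, not a correction for it.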

  9. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
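
The approach of replacing fixed parameters with probability distributions can be sketched with plain Monte Carlo sampling. The toy spectral model and the distributions below are illustrative stand-ins, not the broadband shock-associated noise model used in the paper.

```python
import numpy as np

def propagate_uncertainty(model, param_dists, n_samples=2000, seed=0):
    """Sketch of the nondeterministic approach: draw each uncertain
    parameter from its distribution, evaluate the model per draw, and
    summarize the spread of the model output."""
    rng = np.random.default_rng(seed)
    samples = {k: draw(rng, n_samples) for k, draw in param_dists.items()}
    out = np.array([model(**{k: v[i] for k, v in samples.items()})
                    for i in range(n_samples)])
    return out.mean(axis=0), out.std(axis=0)

# Toy spectral-level model: level(f) = a - b*log10(f), with a and b uncertain
freqs = np.logspace(2, 4, 16)
model = lambda a, b: a - b * np.log10(freqs)
dists = {"a": lambda rng, n: rng.normal(100.0, 3.0, n),
         "b": lambda rng, n: rng.uniform(8.0, 12.0, n)}
mean_level, std_level = propagate_uncertainty(model, dists)
```

Ranking parameters by how much of `std_level` each one drives, e.g. by sampling one parameter at a time, is the simplest form of the sensitivity analysis the abstract mentions.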

  10. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  11. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
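
For reference, the 3PL item response function whose parameters RC-IRT estimates alongside the latent distribution can be written down directly. This is a minimal sketch; some formulations insert the scaling constant D = 1.7 in the exponent, omitted here.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic item response function:
    P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    with discrimination a, difficulty b, and lower asymptote
    (pseudo-guessing) c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Item characteristic curve over a grid of abilities (values illustrative)
theta = np.linspace(-4.0, 4.0, 9)
probs = p_3pl(theta, a=1.2, b=0.5, c=0.2)
```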

  12. Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results

    ERIC Educational Resources Information Center

    San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.

    2013-01-01

    In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this the 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…