Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study three estimators, one based on the geometrical model, and two based on log-linear Poisson regression models, were evaluated in regards to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough[13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models had lower bias and SD for data simulated to deviate from the corresponding model assumptions than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment
NASA Technical Reports Server (NTRS)
Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.;
2013-01-01
Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations are limited, however, by the accuracy of the best estimate large-scale observations prescribed. Errors estimating the observations will result in uncertainty in modeled simulations. One method to address the modeled uncertainty is to simulate an ensemble where the ensemble members span observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best estimate simulation only.
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Eltahir, Elfatih A. B.
2011-02-01
This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
Modeling of Army Research Laboratory EMP simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miletta, J.R.; Chase, R.J.; Luu, B.B.
1993-12-01
Models are required that permit the estimation of emitted field signatures from EMP simulators to design the simulator antenna structure, to establish the usable test volumes, and to estimate human exposure risk. This paper presents the capabilities and limitations of a variety of EMP simulator models useful to the Army's EMP survivability programs. Comparisons among frequency and time-domain models are provided for two powerful US Army Research Laboratory EMP simulators: AESOP (Army EMP Simulator Operations) and VEMPS II (Vertical EMP Simulator II).
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade offs available between estimation accuracy and data acquisition strategy.
Potential effects of climate change on ground water in Lansing, Michigan
Croley, T.E.; Luukkonen, C.L.
2003-01-01
Computer simulations involving general circulation models, a hydrologic modeling system, and a ground water flow model indicate potential impacts of selected climate change projections on ground water levels in the Lansing, Michigan, area. General circulation models developed by the Canadian Climate Centre and the Hadley Centre generated meteorology estimates for 1961 through 1990 (as a reference condition) and for the 20 years centered on 2030 (as a changed climate condition). Using these meteorology estimates, the Great Lakes Environmental Research Laboratory's hydrologic modeling system produced corresponding period streamflow simulations. Ground water recharge was estimated from the streamflow simulations and from variables derived from the general circulation models. The U.S. Geological Survey developed a numerical ground water flow model of the Saginaw and glacial aquifers in the Tri-County region surrounding Lansing, Michigan. Model simulations, using the ground water recharge estimates, indicate changes in ground water levels. Within the Lansing area, simulated ground water levels in the Saginaw aquifer declined under the Canadian predictions and increased under the Hadley.
Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki
2006-01-01
We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how we choose the values of parameters and how we set the network structure. Usually, we tune these unknown factors empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach has the limitation in the size of network of interest. To extend the capability of the simulation model, we propose the use of data assimilation approach that was originally established in the field of geophysical simulation science. We provide genomic data assimilation framework that establishes a link between our simulation model and observed data like microarray gene expression data by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in simulation model are converted as the parameter of the state space model and the estimates are obtained as the maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of the Bayesian statistical inferences. In particular, the Bayesian approach provides us a way of controlling overfitting during the parameter estimations that is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision for simulation studies 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance and choice of inference should properly reflect the purpose of the simulation.
NASA Technical Reports Server (NTRS)
Davis, John H.
1993-01-01
Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations are limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. This data is then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimatemore » simulations are also performed. All models show that moisture related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations enhancing the information derived from models using the more traditional single best-estimate simulation.« less
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistic method based on regional variables and using the tool of variogram to analyze the spatial structure and the patterns of organism. In simulating the variogram within a great range, though optimal simulation cannot be obtained, the simulation method of a dialogue between human and computer can be used to optimize the parameters of the spherical models. In this paper, the method mentioned above and the weighted polynomial regression were utilized to simulate the one-step spherical model, the two-step spherical model and linear function model, and the available nearby samples were used to draw on the ordinary Kriging procedure, which provided a best linear unbiased estimate of the constraint of the unbiased estimation. The sum of square deviation between the estimating and measuring values of varying theory models were figured out, and the relative graphs were shown. It was showed that the simulation based on the two-step spherical model was the best simulation, and the one-step spherical model was better than the linear function model.
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility.This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle tracking is anticipated to evaluate if these model design considerations are similarly important for understanding the primary modeling objective - to simulate reasonable groundwater age distributions.
Internal Interdecadal Variability in CMIP5 Control Simulations
NASA Astrophysics Data System (ADS)
Cheung, A. H.; Mann, M. E.; Frankcombe, L. M.; England, M. H.; Steinman, B. A.; Miller, S. K.
2015-12-01
Here we make use of control simulations from the CMIP5 models to quantify the amplitude of the interdecadal internal variability component in Atlantic, Pacific, and Northern Hemisphere mean surface temperature. We compare against estimates derived from observations using a semi-empirical approach wherein the forced component as estimated using CMIP5 historical simulations is removed to yield an estimate of the residual, internal variability. While the observational estimates are largely consistent with those derived from the control simulations for both basins and the Northern Hemisphere, they lie in the upper range of the model distributions, suggesting the possibility of differences between the amplitudes of observed and modeled variability. We comment on some possible reasons for the disparity.
Estimating solar radiation for plant simulation models
NASA Technical Reports Server (NTRS)
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates using daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of accuracy of daily solar radiation estimates and terms of response when used in a plant growth simulation model (CERES-wheat). Requirements for accuracy of solar radiation for plant growth simulation models are discussed. One algorithm is recommended as being best suited for use in these models when neither measured nor satellite estimated solar radiation values are available.
Attitude Estimation for Unresolved Agile Space Objects with Shape Model Uncertainty
2012-09-01
Simulated lightcurve data using the Cook-Torrance [8] Bidirectional Reflectivity Distribution Function ( BRDF ) model was first applied in a batch estimation...framework to ellipsoidal SO models in geostationary orbits [9]. The Ashikhmin-Shirley [10] BRDF has also been used to study estimation of specular...non-convex 300 facet model and simulated lightcurves using a combination of Lambertian and Cook-Torrance (specular) BRDF models with an Unscented
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle Processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource Constrained - Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation Modeling is a useful tool in estimation of the makespan of the overall process. However, simulation requires a model to be developed, which itself is a labor and time consuming effort. With such a dynamic process, often the model would potentially be out of synchronization with the actual process, limiting the applicability of the simulation answers in solving the actual estimation problem. Integration of TEAMS model enabling software with our existing schedule program software is the basis of our solution. This paper explains the approach used to develop an auto-generated simulation model from planning and schedule efforts and available data.
Using Landsat to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1990-01-01
A summary of project activities relative to the estimation of potato yields in the Columbia Basin is given. Oregon State University is using a two-pronged approach to yield estimation, one using simulation models and the other using purely empirical models. The simulation modeling approach has used satellite observations to determine key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Two empirical modeling approaches are illustrated. One relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI curve.
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral
in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John
2017-01-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
NASA Astrophysics Data System (ADS)
Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank
2017-03-01
Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, over the last 15 years since the inception of MSMs, it has been controversially discussed and not yet been answered how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA of version 2.3.
ERIC Educational Resources Information Center
Wang, Lijuan; McArdle, John J.
2008-01-01
The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.
Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.
1997-01-01
Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation
NASA Astrophysics Data System (ADS)
Lim, Tae W.
2015-06-01
A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
Jennifer C. Jenkins; Richard A. Birdsey
2000-01-01
As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our developed efficient online model checker, MIRACH 1.0, together with parameter estimation, we ran 20-million simulation runs, and were able to locate 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles without much manual intervention. To evaluate the robustness of these 57 parameter sets, we run another 20 million simulation runs using different magnitudes of noise. Our simulation results concluded that among these models, one model is the most reasonable and robust simulation model owing to the high stability against these stochastic noises. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1991-01-01
The estimation of potato yields in the Columbia basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach was used to yield estimation: (1) using simulation models, and (2) using purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to some spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation, the other relates tuber yield to the integral under GVI (Global Vegetation Index) curve.
USDA-ARS?s Scientific Manuscript database
Accurate phosphorus (P) loss estimation from agricultural land is important for development of best management practices and protection of water quality. The Agricultural Policy/Environmental Extender (APEX) model is a powerful simulation model designed to simulate edge-of-field water, sediment, an...
Estimating postfire water production in the Pacific Northwest
Donald F. Potts; David L. Peterson; Hans R. Zuuring
1989-01-01
Two hydrologic models were adapted to estimate postfire changer in water yield in Pacific Northwest watersheds. The WRENSS version of the simulation model PROSPER is used for hydrologic regimes dominated by rainfall: it calculates water available for streamflow onthe basis of seasonal precipitation and leaf area index. The WRENSS version of the simulation model WATBAL...
ERIC Educational Resources Information Center
Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel
2012-01-01
In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis a six degree of freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model; dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analysis has been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to be a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are introduced to white noise and when crosswind is introduced to the simulation. After the linear analysis the results show the minimum frequency at which the algorithm still provides moderately accurate data is at 2Hz. In addition, the linear analysis shows that with estimated parameters increased and decreased up to 25% at random, high priority parameters have to be accurate to within at least +/-5% to have an effect of less than 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously up to +/-27%. At worst-case the maximum percentage change in average coefficient of friction is less than 10% for all surfaces.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
Ockerman, Darwin J.
2005-01-01
The U.S. Geological Survey, in cooperation with the San Antonio Water System, constructed three watershed models using the Hydrological Simulation Program—FORTRAN (HSPF) to simulate streamflow and estimate recharge to the Edwards aquifer in the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds in south-central Texas. The three models were calibrated and tested with available data collected during 1992–2003. Simulations of streamflow and recharge were done for 1951–2003. The approach to construct the models was to first calibrate the Hondo Creek model (with an hourly time step) using 1992–99 data and test the model using 2000–2003 data. The Hondo Creek model parameters then were applied to the Verde Creek and San Geronimo Creek watersheds to construct the Verde Creek and San Geronimo Creek models. The simulated streamflows for Hondo Creek are considered acceptable. Annual, monthly, and daily simulated streamflows adequately match measured values, but simulated hourly streamflows do not. The accuracy of streamflow simulations for Verde Creek is uncertain. For San Geronimo Creek, the match of measured and simulated annual and monthly streamflows is acceptable (or nearly so); but for daily and hourly streamflows, the calibration is relatively poor. Simulated average annual total streamflow for 1951–2003 to Hondo Creek, Verde Creek, and San Geronimo Creek is 45,400; 32,400; and 11,100 acre-feet, respectively. Simulated average annual streamflow at the respective watershed outlets is 13,000; 16,200; and 6,920 acre-feet. The difference between total streamflow and streamflow at the watershed outlet is streamflow lost to channel infiltration. Estimated average annual Edwards aquifer recharge for Hondo Creek, Verde Creek, and San Geronimo Creek watersheds for 1951–2003 is 37,900 acrefeet (5.04 inches), 26,000 acre-feet (3.36 inches), and 5,940 acre-feet (1.97 inches), respectively. Most of the recharge (about 77 percent for the three watersheds together) occurs as streamflow channel infiltration. Diffuse recharge (direct infiltration of rainfall to the aquifer) accounts for the remaining 23 percent of recharge. For the Hondo Creek watershed, the HSPF recharge estimates for 1992–2003 averaged about 22 percent less than those estimated by the Puente method, a method the U.S. Geological Survey has used to compute annual recharge to the Edwards aquifer since 1978. HSPF recharge estimates for the Verde Creek watershed average about 40 percent less than those estimated by the Puente method.
NASA Astrophysics Data System (ADS)
Swenson, S. C.; Lawrence, D. M.
2014-12-01
Estimating the relative contributions of human withdrawals and climate variability to changes in groundwater is a challenging task at present. One method that has been used recently is a model-data synthesis combining GRACE total water storage estimates with simulated water storage estimates from land surface models. In this method, water storage changes due to natural climate variations simulated by a model are removed from total water storage changes observed by GRACE; the residual is then interpreted as anthropogenic groundwater change. If the modeled water storage estimate contains systematic errors, these errors will also be present in the residual groundwater estimate. For example, simulations performed with the Community Land Model (CLM; the land component of the Community Earth System Model) generally show a weak (as much as 50% smaller) seasonal cycle of water storage in semi-arid regions when compared to GRACE satellite water storage estimates. This bias propagates into GRACE-CLM anthropogenic groundwater change estimates, which then exhibit unphysical seasonal variability. The CLM bias can be traced to the parameterization of soil evaporative resistance. Incorporating a new soil resistance parameterization in CLM greatly reduces the seasonal bias with respect to GRACE. In this study, we compare the improved CLM water storage estimates to GRACE and discuss the implications for estimates of anthropogenic groundwater withdrawal, showing examples for the Middle East and Southwestern United States.
Stability of INFIT and OUTFIT Compared to Simulated Estimates in Applied Setting.
Hodge, Kari J; Morgan, Grant B
Residual-based fit statistics are commonly used as an indication of the extent to which the item response data fit the Rash model. Fit statistic estimates are influenced by sample size and rules-of thumb estimates may result in incorrect conclusions about the extent to which the model fits the data. Estimates obtained in this analysis were compared to 250 simulated data sets to examine the stability of the estimates. All INFIT estimates were within the rule-of-thumb range of 0.7 to 1.3. However, only 82% of the INFIT estimates fell within the 2.5th and 97.5th percentile of the simulated item's INFIT distributions using this 95% confidence-like interval. This is a 18 percentage point difference in items that were classified as acceptable. Fourty-eight percent of OUTFIT estimates fell within the 0.7 to 1.3 rule- of-thumb range. Whereas 34% of OUTFIT estimates fell within the 2.5th and 97.5th percentile of the simulated item's OUTFIT distributions. This is a 13 percentage point difference in items that were classified as acceptable. When using the rule-of- thumb ranges for fit estimates the magnitude of misfit was smaller than with the 95% confidence interval of the simulated distribution. The findings indicate that the use of confidence intervals as critical values for fit statistics leads to different model data fit conclusions than traditional rule of thumb critical values.
Comparing Mapped Plot Estimators
Paul C. Van Deusen
2006-01-01
Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2016-01-01
Two ground simulation software are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is due to the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has similar characteristics mean, variance and power spectral density as the true CAST estimation error. In this way, ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from CAST software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiike, S.; Okazaki, Y.
This paper describes a performance estimation tool developed for modeling and simulation of open distributed energy management systems to support their design. The approach of discrete event simulation with detailed models is considered for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocol, operating system, etc. Application softwares are modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can be easily studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.
NASA Astrophysics Data System (ADS)
Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick
2015-04-01
Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is under-estimated on both days in the residual layer, and over-estimated on both days over the residual layer. The under-estimations in the residual layer are partly due to difficulties to estimate the PBL height, to an over-estimation of vertical mixing during nighttime at high altitudes and to uncertainties in PM10 emissions. The PBL schemes and the UCM influence the PM vertical distributions not only because they influence vertical mixing (PBL height and eddy-diffusion coefficient), but also horizontal wind fields and humidity. However, for the UCM, it is the influence on vertical mixing that impacts the most the PM10 vertical distribution below 1.5 km.
Davis, Kyle W.; Long, Andrew J.
2018-05-31
The U.S. Geological Survey developed a groundwater-flow model for the uppermost principal aquifer systems in the Williston Basin in parts of Montana, North Dakota, and South Dakota in the United States and parts of Manitoba and Saskatchewan in Canada as part of a detailed assessment of the groundwater availability in the area. The assessment was done because of the potential for increased demands and stresses on groundwater associated with large-scale energy development in the area. As part of this assessment, a three-dimensional groundwater-flow model was developed as a tool that can be used to simulate how the groundwater-flow system responds to changes in hydrologic stresses at a regional scale.The three-dimensional groundwater-flow model was developed using the U.S. Geological Survey’s numerical finite-difference groundwater model with the Newton-Rhapson solver, MODFLOW–NWT, to represent the glacial, lower Tertiary, and Upper Cretaceous aquifer systems for steady-state (mean) hydrological conditions for 1981‒2005 and for transient (temporally varying) conditions using a combination of a steady-state period for pre-1960 and transient periods for 1961‒2005. The numerical model framework was constructed based on existing and interpreted hydrogeologic and geospatial data and consisted of eight layers. Two layers were used to represent the glacial aquifer system in the model; layer 1 represented the upper one-half and layer 2 represented the lower one-half of the glacial aquifer system. Three layers were used to represent the lower Tertiary aquifer system in the model; layer 3 represented the upper Fort Union aquifer, layer 4 represented the middle Fort Union hydrogeologic unit, and layer 5 represented the lower Fort Union aquifer. Three layers were used to represent the Upper Cretaceous aquifer system in the model; layer 6 represented the upper Hell Creek hydrogeologic unit, layer 7 represented the lower Hell Creek aquifer, and layer 8 represented the Fox Hills aquifer. The numerical model was constructed using a uniform grid with square cells that are about 1 mile (1,600 meters) on each side with a total of about 657,000 active cells.Model calibration was completed by linking Parameter ESTimation (PEST) software with MODFLOW–NWT. The PEST software uses statistical parameter estimation techniques to identify an optimum set of input parameters by adjusting individual model input parameters and assessing the differences, or residuals, between observed (measured or estimated) data and simulated values. Steady-state model calibration consisted of attempting to match mean simulated values to measured or estimated values of (1) hydraulic head, (2) hydraulic head differences between model layers, (3) stream infiltration, and (4) discharge to streams. Calibration of the transient model consisted of attempting to match simulated and measured temporally distributed values of hydraulic head changes, stream base flow, and groundwater discharge to artesian flowing wells. Hydraulic properties estimated through model calibration included hydraulic conductivity, vertical hydraulic conductivity, aquifer storage, and riverbed hydraulic conductivity in addition to groundwater recharge and well skin.The ability of the numerical model to accurately simulate groundwater flow in the Williston Basin was assessed primarily by its ability to match calibration targets for hydraulic head, stream base flow, and flowing well discharge. The steady-state model also was used to assess the simulated potentiometric surfaces in the upper Fort Union aquifer, the lower Fort Union aquifer, and the Fox Hills aquifer. Additionally, a previously estimated regional groundwater-flow budget was compared with the simulated steady-state groundwater-flow budget for the Williston Basin. The simulated potentiometric surfaces typically compared well with the estimated potentiometric surfaces based on measured hydraulic head data and indicated localized groundwater-flow gradients that were topographically controlled in outcrop areas and more generalized regional gradients where the aquifers were confined. The differences between the measured and simulated (residuals) hydraulic head values for 11,109 wells were assessed, which indicated that the steady-state model generally underestimated hydraulic head in the model area. This underestimation is indicated by a positive mean residual of 11.2 feet for all model layers. Layer 7, which represents the lower Hell Creek aquifer, is the only layer for which the steady-state model overestimated hydraulic head. Simulated groundwater-level changes for the transient model matched within plus or minus 2.5 feet of the measured values for more than 60 percent of all measurements and to within plus or minus 17.5 feet for 95 percent of all measurements; however, the transient model underestimated groundwater-level changes for all model layers. A comparison between simulated and estimated base flows for the steady-state and transient models indicated that both models overestimated base flow in streams and underestimated annual fluctuations in base flow.The estimated and simulated groundwater budgets indicate the model area received a substantial amount of recharge from precipitation and stream infiltration. The steady-state model indicated that reservoir seepage was a larger component of recharge in the Williston Basin than was previously estimated. Irrigation recharge and groundwater inflow from outside the Williston Basin accounted for a relatively small part of total groundwater recharge when compared with recharge from precipitation, stream infiltration, and reservoir seepage. Most of the estimated and simulated groundwater discharge in the Williston Basin was to streams and reservoirs. Simulated groundwater withdrawal, discharge to reservoirs, and groundwater outflow in the Williston Basin accounted for a smaller part of total groundwater discharge.The transient model was used to simulate discharge to 571 flowing artesian wells within the model area. Of the 571 established flowing artesian wells simulated by the model, 271 wells did not flow at any time during the simulation because hydraulic head was always below the land-surface altitude. As hydraulic head declined throughout the simulation, 68 of these wells responded by ceasing to flow by the end of 2005. Total mean simulated discharge for the 571 flowing artesian wells was 55.1 cubic feet per second (ft3/s), and the mean simulated flowing well discharge for individual wells was 0.118 ft3/s. Simulated discharge to individual flowing artesian wells increased from 0.039 to 0.177 ft3/s between 1961 and 1975 and decreased to 0.102 ft3/s by 2005. The mean residual for 34 flowing wells with measured discharge was 0.014 ft3/s, which indicates the transient model overestimated discharge to flowing artesian wells in the model area.Model limitations arise from aspects of the conceptual model and from simplifications inherent in the construction and calibration of a regional-scale numerical groundwater-flow model. Simplifying assumptions in defining hydraulic parameters in space and hydrologic stresses and time-varying observational data in time can limit the capabilities of this tool to simulate how the groundwater-flow system responds to changes in hydrologic stresses, particularly at the local scale; nevertheless, the steady-state model adequately simulated flow in the uppermost principal aquifer systems in the Williston Basin based on the comparison between the simulated and estimated groundwater-flow budget, the comparison between simulated and estimated potentiometric surfaces, and the results of the calibration process.
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and using patient models based on accurate patient anatomy and representing a range of patient sizes. Organ dose estimates were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce close to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel close reduction techniques in CT and their effect on organ dose.
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, S. R.; Beven, K.; Freer, J. E.; Law, B. E.
2010-12-01
Semi-arid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) Ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the generalized likelihood uncertainty estimation (GLUE) methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they over-estimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations under-estimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, both autotrophic and heterotrophic, appeared to be fundamental causes of model-data mismatch.
Huntington II Simulation Program - TAG. Student Workbook, Teacher's Guide, and Resource Handbook.
ERIC Educational Resources Information Center
Friedland, James
Presented are instructions for the use of "TAG," a model for estimating animal population in a given area. The computer program asks the student to estimate the number of bass in a simulated farm pond using the technique of tagging and recovery. The objective of the simulation is to teach principles for estimating animal populations when they…
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species were lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
Finite-element simulation of ground-water flow in the vicinity of Yucca Mountain, Nevada-California
Czarnecki, J.B.; Waddell, R.K.
1984-01-01
A finite-element model of the groundwater flow system in the vicinity of Yucca Mountain at the Nevada Test Site was developed using parameter estimation techniques. The model simulated steady-state ground-water flow occurring in tuffaceous, volcanic , and carbonate rocks, and alluvial aquifers. Hydraulic gradients in the modeled area range from 0.00001 for carbonate aquifers to 0.19 for barriers in tuffaceous rocks. Three model parameters were used in estimating transmissivity in six zones. Simulated hydraulic-head values range from about 1,200 m near Timber Mountain to about 300 m near Furnace Creek Ranch. Model residuals for simulated versus measured hydraulic heads range from -28.6 to 21.4 m; most are less than +/-7 m, indicating an acceptable representation of the hydrologic system by the model. Sensitivity analyses of the model 's flux boundary condition variables were performed to assess the effect of varying boundary fluxes on the calculation of estimated model transmissivities. Varying the flux variables representing discharge at Franklin Lake and Furnace Creek Ranch has greater effect than varying other flux variables. (Author 's abstract)
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the entire Australia. To evaluate the filters performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively, improving the model groundwater estimations errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of nonlinear dynamic behavior of complex system, Agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation for the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method by incorporating the experiment data for the simulator ABM during the procedure is proposed. First, we employ ABM as simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model by using the input and output data of ABM and play a role as an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausible measure to discard the implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experiment data among the non-implausible input values. The real Influeza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but it also owns favorable computational efficiency. PMID:29194393
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-12-01
Since they can provide a natural and flexible description of nonlinear dynamic behavior of complex system, Agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation for the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method by incorporating the experiment data for the simulator ABM during the procedure is proposed. First, we employ ABM as simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model by using the input and output data of ABM and play a role as an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausible measure to discard the implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experiment data among the non-implausible input values. The real Influeza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but it also owns favorable computational efficiency.
Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities
NASA Astrophysics Data System (ADS)
Baylin-Stern, Adam C.
This paper demonstrates how an U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future, 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both discrete modes and continuous state estimates of a hybrid dynamic system either an IMM extended Kalman filter (IMM-EKF) or an IMM based derivative-free Kalman filters is proposed in this study. The efficacy of the proposed IMM based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
White, Edward W; Lumley, Thomas; Goodreau, Steven M; Goldbaum, Gary; Hawes, Stephen E
2010-12-01
To produce valid seroincidence estimates, the serological testing algorithm for recent HIV seroconversion (STARHS) assumes independence between infection and testing, which may be absent in clinical data. STARHS estimates are generally greater than cohort-based estimates of incidence from observable person-time and diagnosis dates. The authors constructed a series of partial stochastic models to examine whether testing motivated by suspicion of infection could bias STARHS. One thousand Monte Carlo simulations of 10,000 men who have sex with men were generated using parameters for HIV incidence and testing frequency from data from a clinical testing population in Seattle. In one set of simulations, infection and testing dates were independent. In another set, some intertest intervals were abbreviated to reflect the distribution of intervals between suspected HIV exposure and testing in a group of Seattle men who have sex with men recently diagnosed as having HIV. Both estimation methods were applied to the simulated datasets. Both cohort-based and STARHS incidence estimates were calculated using the simulated data and compared with previously calculated, empirical cohort-based and STARHS seroincidence estimates from the clinical testing population. Under simulated independence between infection and testing, cohort-based and STARHS incidence estimates resembled cohort estimates from the clinical dataset. Under simulated motivated testing, cohort-based estimates remained unchanged, but STARHS estimates were inflated similar to empirical STARHS estimates. Varying motivation parameters appreciably affected STARHS incidence estimates, but not cohort-based estimates. Cohort-based incidence estimates are robust against dependence between testing and acquisition of infection, whereas STARHS incidence estimates are not.
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
Ackerman, Daniel J.; Rousseau, Joseph P.; Rattray, Gordon W.; Fisher, Jason C.
2010-01-01
Three-dimensional steady-state and transient models of groundwater flow and advective transport in the eastern Snake River Plain aquifer were developed by the U.S. Geological Survey in cooperation with the U.S. Department of Energy. The steady-state and transient flow models cover an area of 1,940 square miles that includes most of the 890 square miles of the Idaho National Laboratory (INL). A 50-year history of waste disposal at the INL has resulted in measurable concentrations of waste contaminants in the eastern Snake River Plain aquifer. Model results can be used in numerical simulations to evaluate the movement of contaminants in the aquifer. Saturated flow in the eastern Snake River Plain aquifer was simulated using the MODFLOW-2000 groundwater flow model. Steady-state flow was simulated to represent conditions in 1980 with average streamflow infiltration from 1966-80 for the Big Lost River, the major variable inflow to the system. The transient flow model simulates groundwater flow between 1980 and 1995, a period that included a 5-year wet cycle (1982-86) followed by an 8-year dry cycle (1987-94). Specified flows into or out of the active model grid define the conditions on all boundaries except the southwest (outflow) boundary, which is simulated with head-dependent flow. In the transient flow model, streamflow infiltration was the major stress, and was variable in time and location. The models were calibrated by adjusting aquifer hydraulic properties to match simulated and observed heads or head differences using the parameter-estimation program incorporated in MODFLOW-2000. Various summary, regression, and inferential statistics, in addition to comparisons of model properties and simulated head to measured properties and head, were used to evaluate the model calibration. Model parameters estimated for the steady-state calibration included hydraulic conductivity for seven of nine hydrogeologic zones and a global value of vertical anisotropy. Parameters estimated for the transient calibration included specific yield for five of the seven hydrogeologic zones. The zones represent five rock units and parts of four rock units with abundant interbedded sediment. All estimates of hydraulic conductivity were nearly within 2 orders of magnitude of the maximum expected value in a range that exceeds 6 orders of magnitude. The estimate of vertical anisotropy was larger than the maximum expected value. All estimates of specific yield and their confidence intervals were within the ranges of values expected for aquifers, the range of values for porosity of basalt, and other estimates of specific yield for basalt. The steady-state model reasonably simulated the observed water-table altitude, orientation, and gradients. Simulation of transient flow conditions accurately reproduced observed changes in the flow system resulting from episodic infiltration from the Big Lost River and facilitated understanding and visualization of the relative importance of historical differences in infiltration in time and space. As described in a conceptual model, the numerical model simulations demonstrate flow that is (1) dominantly horizontal through interflow zones in basalt and vertical anisotropy resulting from contrasts in hydraulic conductivity of various types of basalt and the interbedded sediments, (2) temporally variable due to streamflow infiltration from the Big Lost River, and (3) moving downward downgradient of the INL. The numerical models were reparameterized, recalibrated, and analyzed to evaluate alternative conceptualizations or implementations of the conceptual model. The analysis of the reparameterized models revealed that little improvement in the model could come from alternative descriptions of sediment content, simulated aquifer thickness, streamflow infiltration, and vertical head distribution on the downgradient boundary. Of the alternative estimates of flow to or from the aquifer, only a 20 percent decrease in
Flood Scenario Simulation and Disaster Estimation of Ba-Ma Creek Watershed in Nantou County, Taiwan
NASA Astrophysics Data System (ADS)
Peng, S. H.; Hsu, Y. K.
2018-04-01
The present study proposed several scenario simulations of flood disaster according to the historical flood event and planning requirement in Ba-Ma Creek Watershed located in Nantou County, Taiwan. The simulations were made using the FLO-2D model, a numerical model which can compute the velocity and depth of flood on a two-dimensional terrain. Meanwhile, the calculated data were utilized to estimate the possible damage incurred by the flood disaster. The results thus obtained can serve as references for disaster prevention. Moreover, the simulated results could be employed for flood disaster estimation using the method suggested by the Water Resources Agency of Taiwan. Finally, the conclusions and perspectives are presented.
Benefit-cost estimation for alternative drinking water maximum contaminant levels
NASA Astrophysics Data System (ADS)
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou
2012-01-01
Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
Progress report on daily flow-routing simulation for the Carson River, California and Nevada
Hess, G.W.
1996-01-01
A physically based flow-routing model using Hydrological Simulation Program-FORTRAN (HSPF) was constructed for modeling streamflow in the Carson River at daily time intervals as part of the Truckee-Carson Program of the U.S. Geological Survey (USGS). Daily streamflow data for water years 1978-92 for the mainstem river, tributaries, and irrigation ditches from the East Fork Carson River near Markleeville and West Fork Carson River at Woodfords down to the mainstem Carson River at Fort Churchill upstream from Lahontan Reservoir were obtained from several agencies and were compiled into a comprehensive data base. No previous physically based flow-routing model of the Carson River has incorporated multi-agency streamflow data into a single data base and simulated flow at a daily time interval. Where streamflow data were unavailable or incomplete, hydrologic techniques were used to estimate some flows. For modeling purposes, the Carson River was divided into six segments, which correspond to those used in the Alpine Decree that governs water rights along the river. Hydraulic characteristics were defined for 48 individual stream reaches based on cross-sectional survey data obtained from field surveys and previous studies. Simulation results from the model were compared with available observed and estimated streamflow data. Model testing demonstrated that hydraulic characteristics of the Carson River are adequately represented in the models for a range of flow regimes. Differences between simulated and observed streamflow result mostly from inadequate data characterizing inflow and outflow from the river. Because irrigation return flows are largely unknown, irrigation return flow percentages were used as a calibration parameter to minimize differences between observed and simulated streamflows. Observed and simulated streamflow were compared for daily periods for the full modeled length of the Carson River and for two major subreaches modeled with more detailed input data. Hydrographs and statistics presented in this report describe these differences. A sensitivity analysis of four estimated components of the hydrologic system evaluated which components were significant in the model. Estimated ungaged tributary streamflow is not a significant component of the model during low runoff, but is significant during high runoff. The sensitivity analysis indicates that changes in the estimated irrigation diversion and estimated return flow creates a noticeable change in the statistics. The modeling for this study is preliminary. Results of the model are constrained by current availability and accuracy of observed hydrologic data. Several inflows and outflows of the Carson River are not described by time-series data and therefore are not represented in the model.
Gomes Junior, Saint Clair Santos; Almeida, Rosimary Terezinha
2009-02-01
To develop a simulation model using public data to estimate the cancer care infrastructure required by the public health system in the state of São Paulo, Brazil. Public data from the Unified Health System database regarding cancer surgery, chemotherapy, and radiation therapy, from January 2002-January 2004, were used to estimate the number of cancer cases in the state. The percentages recorded for each therapy in the Hospital Cancer Registry of Brazil were combined with the data collected from the database to estimate the need for services. Mixture models were used to identify subgroups of cancer cases with regard to the length of time that chemotherapy and radiation therapy were required. A simulation model was used to estimate the infrastructure required taking these parameters into account. The model indicated the need for surgery in 52.5% of the cases, radiation therapy in 42.7%, and chemotherapy in 48.5%. The mixture models identified two subgroups for radiation therapy and four subgroups for chemotherapy with regard to mean usage time for each. These parameters allowed the following estimated infrastructure needs to be made: 147 operating rooms, 2 653 operating beds, 297 chemotherapy chairs, and 102 radiation therapy devices. These estimates suggest the need for a 1.2-fold increase in the number of chemotherapy services and a 2.4-fold increase in the number of radiation therapy services when compared with the parameters currently used by the public health system. A simulation model, such as the one used in the present study, permits better distribution of health care resources because it is based on specific, local needs.
Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter
Liu, Xiaozhen; Frey, H. Christopher
2012-01-01
A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, windspeed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.
Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron
2018-06-14
This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric. We show that the rate estimates systematically increase as more stringent local order parameters are used to count members of a cluster and provide tentative suggestions for appropriate clustering criteria.
Sepúlveda, Nicasio; Tiedeman, Claire; O'Reilly, Andrew M.; Davis, Jeffrey B.; Burger, Patrick
2012-01-01
A numerical transient model of the surficial and Floridan aquifer systems in east-central Florida was developed to (1) increase the understanding of water exchanges between the surficial and the Floridan aquifer systems, (2) assess the recharge rates to the surficial aquifer system from infiltration through the unsaturated zone and (3) obtain a simulation tool that could be used by water-resource managers to assess the impact of changes in groundwater withdrawals on spring flows and on the potentiometric surfaces of the hydrogeologic units composing the Floridan aquifer system. The hydrogeology of east-central Florida was evaluated and used to develop and calibrate the groundwater flow model, which simulates the regional fresh groundwater flow system. The U.S. Geological Survey three-dimensional groundwater flow model, MODFLOW-2005, was used to simulate transient groundwater flow in the surficial, intermediate, and Floridan aquifer systems from 1995 to 2006. The East-Central Florida Transient model encompasses an actively simulated area of about 9,000 square miles. Although the model includes surficial processes-rainfall, irrigation, evapotranspiration (ET), runoff, infiltration, lake water levels, and stream water levels and flows-its primary purpose is to characterize and refine the understanding of groundwater flow in the Floridan aquifer system. Model-independent estimates of the partitioning of rainfall into ET, streamflow, and aquifer recharge are provided from a water-budget analysis of the surficial aquifer system. The interaction of the groundwater flow system with the surface environment was simulated using the Green-Ampt infiltration method and the MODFLOW-2005 Unsaturated-Zone Flow, Lake, and Streamflow-Routing Packages. The model is intended to simulate the part of the groundwater system that contains freshwater. The bottom and lateral boundaries of the model were established at the estimated depths where the chloride concentration is 5,000 milligrams per liter in the Floridan aquifer system. Potential flow across the interface represented by this chloride concentration is simulated by the General Head Boundary Package. During 1995 through 2006, there were no major groundwater withdrawals near the freshwater and saline-water interface, making the general head boundary a suitable feature to estimate flow through the interface. The east-central Florida transient model was calibrated using the inverse parameter estimation code, PEST. Steady-state models for 1999 and 2003 were developed to estimate hydraulic conductivity (K) using average annual heads and spring flows as observations. The spatial variation of K was represented using zones of constant values in some layers, and pilot points in other layers. Estimated K values were within one order of magnitude of aquifer performance test data. A simulation of the final two years (2005-2006) of the 12-year model, with the K estimates from the steady-state calibration, was used to guide the estimation of specific yield and specific storage values. The final model yielded head and spring-flow residuals that met the calibration criteria for the 12-year transient simulation. The overall mean residual for heads, defining residual as simulated minus measured value, was -0.04 foot. The overall root-mean square residual for heads was less than 3.6 feet for each year in the 1995 to 2006 simulation period. The overall mean residual for spring flows was -0.3 cubic foot per second. The spatial distribution of head residuals was generally random, with some minor indications of bias. Simulated average ET over the 1995 to 2006 period was 34.47 inches per year, compared to the calculated average ET rate of 36.39 inches per year from the model-independent water-budget analysis. Simulated average net recharge to the surficial aquifer system was 3.58 inches per year, compared with the calculated average of 3.39 inches per year from the model-independent water-budget analysis. Groundwater withdrawals from the Floridan aquifer system averaged about 920 million gallons per day, which is equivalent to about 2 inches per year over the model area and slightly more than half of the simulated average net recharge to the surficial aquifer system over the same period. Annual net simulated recharge rates to the surficial aquifer system were less than the total groundwater withdrawals from the Floridan aquifer system only during the below-average rainfall years of 2000 and 2006.
Sepúlveda, Nicasio; Tiedeman, Claire; O'Reilly, Andrew M.; Davis, Jeffery B.; Burger, Patrick
2012-01-01
A numerical transient model of the surficial and Floridan aquifer systems in east-central Florida was developed to (1) increase the understanding of water exchanges between the surficial and the Floridan aquifer systems, (2) assess the recharge rates to the surficial aquifer system from infiltration through the unsaturated zone and (3) obtain a simulation tool that could be used by water-resource managers to assess the impact of changes in groundwater withdrawals on spring flows and on the potentiometric surfaces of the hydrogeologic units composing the Floridan aquifer system. The hydrogeology of east-central Florida was evaluated and used to develop and calibrate the groundwater flow model, which simulates the regional fresh groundwater flow system. The U.S. Geological Survey three-dimensional groundwater flow model, MODFLOW-2005, was used to simulate transient groundwater flow in the surficial, intermediate, and Floridan aquifer systems from 1995 to 2006. The east-central Florida transient model encompasses an actively simulated area of about 9,000 square miles. Although the model includes surficial processes-rainfall, irrigation, evapotranspiration, runoff, infiltration, lake water levels, and stream water levels and flows-its primary purpose is to characterize and refine the understanding of groundwater flow in the Floridan aquifer system. Model-independent estimates of the partitioning of rainfall into evapotranspiration, streamflow, and aquifer recharge are provided from a water-budget analysis of the surficial aquifer system. The interaction of the groundwater flow system with the surface environment was simulated using the Green-Ampt infiltration method and the MODFLOW-2005 Unsaturated-Zone Flow, Lake, and Streamflow-Routing Packages. The model is intended to simulate the part of the groundwater system that contains freshwater. The bottom and lateral boundaries of the model were established at the estimated depths where the chloride concentration is 5,000 milligrams per liter in the Floridan aquifer system. Potential flow across the interface represented by this chloride concentration is simulated by the General Head Boundary Package. During 1995 through 2006, there were no major groundwater withdrawals near the freshwater and saline-water interface, making the general head boundary a suitable feature to estimate flow through the interface. The east-central Florida transient model was calibrated using the inverse parameter estimation code, PEST. Steady-state models for 1999 and 2003 were developed to estimate hydraulic conductivity (K) using average annual heads and spring flows as observations. The spatial variation of K was represented using zones of constant values in some layers, and pilot points in other layers. Estimated K values were within one order of magnitude of aquifer performance test data. A simulation of the final two years (2005-2006) of the 12-year model, with the K estimates from the steady-state calibration, was used to guide the estimation of specific yield and specific storage values. The final model yielded head and spring-flow residuals that met the calibration criteria for the 12-year transient simulation. The overall mean residual for heads, defining residual as simulated minus measured value, was -0.04 foot. The overall root-mean square residual for heads was less than 3.6 feet for each year in the 1995 to 2006 simulation period. The overall mean residual for spring flows was -0.3 cubic foot per second. The spatial distribution of head residuals was generally random, with some minor indications of bias. Simulated average evapotranspiration (ET) over the 1995 to 2006 period was 34.5 inches per year, compared to the calculated average ET rate of 36.6 inches per year from the model-independent water-budget analysis. Simulated average net recharge to the surficial aquifer system was 3.6 inches per year, compared with the calculated average of 3.2 inches per year from the model-independent waterbudget analysis. Groundwater withdrawals from the Floridan aquifer system averaged about 800 million gallons per day, which is equivalent to about 2 inches per year over the model area and slightly more than half of the simulated average net recharge to the surficial aquifer system over the same period. Annual net simulated recharge rates to the surficial aquifer system were less than the total groundwater withdrawals from the Floridan aquifer system only during the below-average rainfall years of 2000 and 2006.
Aerodynamic loads on buses due to crosswind gusts: extended analysis
NASA Astrophysics Data System (ADS)
Drugge, Lars; Juhlin, Magnus
2010-12-01
The objective of this work is to use inverse simulations on measured vehicle data in order to estimate the aerodynamic loads on a bus when exposed to crosswind situations. Tyre forces, driver input, wind velocity and vehicle response were measured on a typical coach when subjected to natural crosswind gusts. Based on these measurements and a detailed MBS vehicle model, the aerodynamic loads were estimated through inverse simulations. In order to estimate the lift force, roll and pitch moments in addition to the lateral force and yaw moment, the simulation model was extended by also incorporating the estimation of the vertical road disturbances. The proposed method enables the estimation of aerodynamic loads due to crosswind gusts without using a full scale wind tunnel adapted for crosswind excitation.
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models in forecasting dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.15)
Long, Andrew J.
2015-01-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, spring flow, groundwater level, or solute transport for a measurement point in response to a system input of precipitation, recharge, or solute injection. I introduce the first version of RRAWFLOW available for download and public use and describe additional options. The open-source code is written in the R language and is available at http://sd.water.usgs.gov/projects/RRAWFLOW/RRAWFLOW.html along with an example model of streamflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit-hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Several options are included to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications (e.g., estimating missing periods in a hydrologic record). RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, K; Bostani, M; McNitt-Gray, M
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate themore » complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not. Funding Support: NIH Grant R01-EB017095; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski; Disclosures - Cynthia McCollough: Research Grant, Siemens Healthcare.« less
NASA Astrophysics Data System (ADS)
Xu, Bin; Ye, Ming; Dong, Shuning; Dai, Zhenxue; Pei, Yongzhen
2018-07-01
Quantitative analysis of recession curves of karst spring hydrographs is a vital tool for understanding karst hydrology and inferring hydraulic properties of karst aquifers. This paper presents a new model for simulating karst spring recession curves. The new model has the following characteristics: (1) the model considers two separate but hydraulically connected reservoirs: matrix reservoir and conduit reservoir; (2) the model separates karst spring hydrograph recession into three stages: conduit-drainage stage, mixed-drainage stage (with both conduit drainage and matrix drainage), and matrix-drainage stage; and (3) in the mixed-drainage stage, the model uses multiple conduit layers to present different levels of conduit development. The new model outperforms the classical Mangin model and the recently developed Fiorillo model for simulating observed discharge at the Madison Blue Spring located in northern Florida. This is attributed to the latter two characteristics of the new model. Based on the new model, a method is developed for estimating effective porosity of the matrix and conduit reservoirs for the three drainage stages. The estimated porosity values are consistent with measured matrix porosity at the study site and with estimated conduit porosity reported in literature. The new model for simulating karst spring hydrograph recession is mathematically general, and can be applied to a wide range of karst spring hydrographs to understand groundwater flow in karst aquifers. The limitations of the model are discussed at the end of this paper.
Simulations of motor unit number estimation techniques
NASA Astrophysics Data System (ADS)
Major, Lora A.; Jones, Kelvin E.
2005-06-01
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with multiple regression model if there are collinearities, then one or several predictor variables are usually omitted from the model. However, there sometimes some reasons, for instance medical or economic reasons, the predictors are all important and should be included in the model. Ridge regression model is not uncommon in some researches to use to cope with collinearity. Through this modeling, weights for predictor variables are used for estimating parameters. The next estimation process could follow the concept of likelihood. Furthermore, for the estimation nowadays the Bayesian version could be an alternative. This estimation method does not match likelihood one in terms of popularity due to some difficulties; computation and so forth. Nevertheless, with the growing improvement of computational methodology recently, this caveat should not at the moment become a problem. This paper discusses about simulation process for evaluating the characteristic of Bayesian Ridge regression parameter estimates. There are several simulation settings based on variety of collinearity levels and sample sizes. The results show that Bayesian method gives better performance for relatively small sample sizes, and for other settings the method does perform relatively similar to the likelihood method.
Davis, Kyle W.; Putnam, Larry D.
2013-01-01
The Ogallala aquifer is an important water resource for the Rosebud Sioux Tribe in Gregory and Tripp Counties in south-central South Dakota and is used for irrigation, public supply, domestic, and stock water supplies. To better understand groundwater flow in the Ogallala aquifer, conceptual and numerical models of groundwater flow were developed for the aquifer. A conceptual model of the Ogallala aquifer was used to analyze groundwater flow and develop a numerical model to simulate groundwater flow in the aquifer. The MODFLOW–NWT model was used to simulate transient groundwater conditions for water years 1985–2009. The model was calibrated using statistical parameter estimation techniques. Potential future scenarios were simulated using the input parameters from the calibrated model for simulations of potential future drought and future increased pumping. Transient simulations were completed with the numerical model. A 200-year transient initialization period was used to establish starting conditions for the subsequent 25-year simulation of water years 1985–2009. The 25-year simulation was discretized into three seasonal stress periods per year and used to simulate transient conditions. A single-layer model was used to simulate flow and mass balance in the Ogallala aquifer with a grid of 133 rows and 282 columns and a uniform spacing of 500 meters (1,640 feet). Regional inflow and outflow were simulated along the western and southern boundaries using specified-head cells. All other boundaries were simulated using no-flow cells. Recharge to the aquifer occurs through precipitation on the outcrop area. Model calibration was accomplished using the Parameter Estimation (PEST) program that adjusted individual model input parameters and assessed the difference between estimated and model-simulated values of hydraulic head and base flow. This program was designed to estimate parameter values that are statistically the most likely set of values to result in the smallest differences between simulated and observed values, within a given set of constraints. The potentiometric surface of the aquifer calculated during the 200-year initialization period established initial conditions for the transient simulation. Water levels for 38 observation wells were used to calibrate the 25-year simulation. Simulated hydraulic heads for the transient simulation were within plus or minus 20 feet of observed values for 95 percent of observation wells, and the mean absolute difference was 5.1 feet. Calibrated hydraulic conductivity ranged from 0.9 to 227 feet per day (ft/d). The annual recharge rates for the transient simulation (water years 1985–2009) ranged from 0.60 to 6.96 inches, with a mean of 3.68 inches for the Ogallala aquifer. This represents a mean recharge rate of 280.5 ft3/s for the model area. Discharge from the aquifer occurs through evapotranspiration, discharge to streams through river leakage and flow from springs and seeps, and well withdrawals. Water is withdrawn from wells for irrigation, public supply, domestic, and stock uses. Simulated mean discharge rates for water years 1985–2009 were about 185 cubic feet per second (ft3/s) for evapotranspiration, 66.7 ft3/s for discharge to streams, and 5.48 ft3/s for well withdrawals. Simulated annual evapotranspiration rates ranged from about 128 to 254 ft3/s, and outflow to streams ranged from 52.2 to 79.9 ft3/s. A sensitivity analysis was used to examine the response of the calibrated model to changes in model parameters for horizontal hydraulic conductivity, recharge, evapotranspiration, and spring and riverbed conductance. The model was most sensitive to recharge and maximum potential evapotranspiration and least sensitive to riverbed and spring conductances. Two potential future scenarios were simulated: a potential drought scenario and a potential increased pumping scenario. To simulate a potential drought scenario, a synthetic drought record was created, the mean of which was equal to 60 percent of the mean estimated recharge rate for the 25-year simulation period. Compared with the results of the calibrated model (non-drought simulation), the simulation representing a potential drought scenario resulted in water-level decreases of as much as 30 feet for the Ogallala aquifer. To simulate the effects of potential future increases in pumping, well withdrawal rates were increased by 50 percent from those estimated for the 25-year simulation period. Compared with the results of the calibrated model, the simulation representing an increased pumping scenario resulted in water-level decreases of as much as 26 feet for the Ogallala aquifer. Groundwater budgets for the potential future scenario simulations were compared with the transient simulation representing water years 1985–2009. The simulation representing a potential drought scenario resulted in lower aquifer recharge from precipitation and decreased discharge from streams, springs, seeps, and evapotranspiration. The simulation representing a potential increased pumping scenario was similar to results from the transient simulation, with a slight increase in well withdrawals and a slight decrease in discharge from river leakage and evapotranspiration. This numerical model is suitable as a tool that could be used to better understand the flow system of the Ogallala aquifer, to approximate hydraulic heads in the aquifer, and to estimate discharge to rivers, springs, and seeps in the study area. The model also is useful to help assess the response of the aquifer to additional stresses, including potential drought conditions and increased well withdrawals.
Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel
2013-01-01
Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which converts an average count of rings into age, only provide periodic age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variations and to include knowledge on parameters from expertise. The performance of the proposed model was examined through simulations, and then it was coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the saggital otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of the ageing method can strongly affect growth estimates with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach revealed particularly useful to reflect uncertainty around age estimates into the process of growth estimation and it can be applied to any study relying on age estimation. PMID:23637773
A Simulation Tool for Dynamic Contrast Enhanced MRI
Mauconduit, Franck; Christen, Thomas; Barbier, Emmanuel Luc
2013-01-01
The quantification of bolus-tracking MRI techniques remains challenging. The acquisition usually relies on one contrast and the analysis on a simplified model of the various phenomena that arise within a voxel, leading to inaccurate perfusion estimates. To evaluate how simplifications in the interstitial model impact perfusion estimates, we propose a numerical tool to simulate the MR signal provided by a dynamic contrast enhanced (DCE) MRI experiment. Our model encompasses the intrinsic and relaxations, the magnetic field perturbations induced by susceptibility interfaces (vessels and cells), the diffusion of the water protons, the blood flow, the permeability of the vessel wall to the the contrast agent (CA) and the constrained diffusion of the CA within the voxel. The blood compartment is modeled as a uniform compartment. The different blocks of the simulation are validated and compared to classical models. The impact of the CA diffusivity on the permeability and blood volume estimates is evaluated. Simulations demonstrate that the CA diffusivity slightly impacts the permeability estimates ( for classical blood flow and CA diffusion). The effect of long echo times is investigated. Simulations show that DCE-MRI performed with an echo time may already lead to significant underestimation of the blood volume (up to 30% lower for brain tumor permeability values). The potential and the versatility of the proposed implementation are evaluated by running the simulation with realistic vascular geometry obtained from two photons microscopy and with impermeable cells in the extravascular environment. In conclusion, the proposed simulation tool describes DCE-MRI experiments and may be used to evaluate and optimize acquisition and processing strategies. PMID:23516414
A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the con...
Jeton, Anne E.; Maurer, Douglas K.
2007-01-01
Recent estimates of ground-water inflow to the basin-fill aquifers of Carson Valley, Nevada, and California, from the adjacent Carson Range and Pine Nut Mountains ranged from 22,000 to 40,000 acre-feet per year using water-yield and chloride-balance methods. In this study, watershed models were developed for watersheds with perennial streams and for watersheds with ephemeral streams in the Carson Range and Pine Nut Mountains to provide an independent estimate of ground-water inflow. This report documents the development and calibration of the watershed models, presents model results, compares the results with recent estimates of ground-water inflow to the basin-fill aquifers of Carson Valley, and presents updated estimates of the ground-water budget for basin-fill aquifers of Carson Valley. The model used for the study was the Precipitation-Runoff Modeling System, a physically based, distributed-parameter model designed to simulate precipitation and snowmelt runoff as well as snowpack accumulation and snowmelt processes. Geographic Information System software was used to manage spatial data, characterize model drainages, and to develop Hydrologic Response Units. Models were developed for * Two watersheds with gaged perennial streams in the Carson Range and two watersheds with gaged perennial streams in the Pine Nut Mountains using measured daily mean runoff, * Ten watersheds with ungaged perennial streams using estimated daily mean runoff, * Ten watershed with ungaged ephemeral streams in the Carson Range, and * A large area of ephemeral runoff near the Pine Nut Mountains. Models developed for the gaged watersheds were used as index models to guide the calibration of models for ungaged watersheds. Model calibration was constrained by daily mean runoff for 4 gaged watersheds and for 10 ungaged watersheds in the Carson Range estimated in a previous study. The models were further constrained by annual precipitation volumes estimated in a previous study to provide estimates of ground-water inflow using similar water input. The calibration periods were water years 1990-2002 for watersheds in the Carson Range, and water years 1981-97 for watersheds in the Pine Nut Mountains. Daily mean values for water years 1990-2002 were then simulated using the calibrated watershed models in the Pine Nut Mountains. The daily mean values of precipitation, runoff, evapotranspiration, and ground-water inflow simulated from the watershed models were summed to provide annual mean rates and volumes for each year of the simulations, and mean annual rates and volumes computed for water years 1990-2002. Mean annual bias for the period of record for models of Daggett Creek and Fredericksburg Canyon watersheds, two gaged perennial watersheds in the Carson Range, was within 4 percent and relative errors were about 6 and 12 percent, respectively. Model fit was not as satisfactory for two gaged perennial watersheds, Pine Nut and Buckeye Creeks, in the Pine Nut Mountains. The Pine Nut Creek watershed model had a large negative mean annual bias and a relative error of -11 percent, underestimated runoff for all years but the wet years in the latter part of the record, but adequately simulated the bulk of the spring runoff most of the years. The Buckeye Creek watershed model overestimated mean annual runoff with a relative error of about -5 percent when water year 1994 was removed from the analysis because it had a poor record. The bias and error of the calibrated models were within generally accepted limits for watershed models, indicating the simulated rates and volumes of runoff and ground-water inflow were reasonable. The total mean annual ground-water inflow to Carson Valley computed using estimates simulated by the watershed models was 38,000 acre-feet, including ground-water inflow from Eagle Valley, recharge from precipitation on eolian sand and gravel deposits, and ground-water recharge from precipitation on the western alluvial fans. The estimate was in close agreement with that obtained from the chloride-balance method, 40,000 acre-feet, but was considerably greater than the estimate obtained from the water-yield method, 22,000 acre-feet. The similar estimates obtained from the watershed models and chloride-balance method, two relatively independent methods, provide more confidence that they represent a reasonably accurate volume of ground-water inflow to Carson Valley. However, the two estimates are not completely independent because they use similar distributions of mean annual precipitation. Annual ground-water recharge of the basin-fill aquifers in Carson Valley ranged from 51,000 to 54,000 acre-feet computed using estimates of ground-water inflow to Carson Valley simulated from the watershed models combined with previous estimates of other ground-water budget components. Estimates of mean annual ground-water discharge range from 44,000 to 47,000 acre-feet. The low range estimate for ground-water recharge, 51,000 acre-feet per year, is most similar to the high range estimate for ground-water discharge, 47,000 acre-feet per year. Thus, an average annual volume of about 50,000 acre-feet is a reasonable estimate for mean annual ground-water recharge to and discharge from the basin-fill aquifers in Carson Valley. The results of watershed models indicate that significant interannual variability in the volumes of ground-water inflow is caused by climate variations. During multi-year drought conditions, the watershed simulations indicate that ground-water recharge could be as much as 80 percent less than the mean annual volume of 50,000 acre-feet.
Evaluating Satellite-based Rainfall Estimates for Basin-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Yilmaz, K. K.; Hogue, T. S.; Hsu, K.; Gupta, H. V.; Mahani, S. E.; Sorooshian, S.
2003-12-01
The reliability of any hydrologic simulation and basin outflow prediction effort depends primarily on the rainfall estimates. The problem of estimating rainfall becomes more obvious in basins with scarce or no rain gauges. We present an evaluation of satellite-based rainfall estimates for basin-scale hydrologic modeling with particular interest in ungauged basins. The initial phase of this study focuses on comparison of mean areal rainfall estimates from ground-based rain gauge network, NEXRAD radar Stage-III, and satellite-based PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and their influence on hydrologic model simulations over several basins in the U.S. Six-hourly accumulations of the above competing mean areal rainfall estimates are used as input to the Sacramento Soil Moisture Accounting Model. Preliminary experiments for the Leaf River Basin in Mississippi, for the period of March 2000 - June 2002, reveals that seasonality plays an important role in the comparison. There is an overestimation during the summer and underestimation during the winter in satellite-based rainfall with respect to the competing rainfall estimates. The consequence of this result on the hydrologic model is that simulated discharge underestimates the major observed peak discharges during early spring for the basin under study. Future research will entail developing correction procedures, which depend on different factors such as seasonality, geographic location and basin size, for satellite-based rainfall estimates over basins with dense rain gauge network and/or radar coverage. Extension of these correction procedures to satellite-based rainfall estimates over ungauged basins with similar characteristics has the potential for reducing the input uncertainty in ungauged basin modeling efforts.
Monte Carlo simulation of single accident airport risk profile
NASA Technical Reports Server (NTRS)
1979-01-01
A computer simulation model was developed for estimating the potential economic impacts of a carbon fiber release upon facilities within an 80 kilometer radius of a major airport. The model simulated the possible range of release conditions and the resulting dispersion of the carbon fibers. Each iteration of the model generated a specific release scenario, which would cause a specific amount of dollar loss to the surrounding community. By repeated iterations, a risk profile was generated, showing the probability distribution of losses from one accident. Using accident probability estimates, the risks profile for annual losses was derived. The mechanics are described of the simulation model, the required input data, and the risk profiles generated for the 26 large hub airports.
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston
2009-01-01
Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...
NASA Astrophysics Data System (ADS)
Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez
2005-01-01
An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
Simulation of atmospheric oxidation capacity in Houston, Texas
Air quality model simulations are performed and evaluated for Houston using the Community Multiscale Air Quality (CMAQ) model. The simulations use two different emissions estimates: the EPA 2005 National Emissions Inventory (NEI) and the Texas Commission on Environmental Quality ...
Dual Arm Work Package performance estimates and telerobot task network simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draper, J.V.; Blair, L.M.
1997-02-01
This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided the US Department of Energy`s Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collectedmore » to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.« less
APEX model simulation of edge-of-field water quality benefits from upland buffers
USDA-ARS?s Scientific Manuscript database
For maximum usefulness, simulation models must be able to estimate the effectiveness of management practices not represented in the dataset used for model calibration. This study focuses on the ability of the Agricultural Policy Environmental eXtender (APEX) to simulate upland buffer effectiveness f...
Connell, J.F.; Bailey, Z.C.
1989-01-01
A total of 338 single-well aquifer tests from Bear Creek and Melton Valley, Tennessee were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of ground-water flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor does it add information on the flow system. Improved model performance would require: (1) more water level data for the Copper Ridge Dolomite; (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone; and (3) more water level data and aquifer tests in deep bedrock. (USGS)
General-circulation-model simulations of future snowpack in the western United States
McCabe, G.J.; Wolock, D.M.
1999-01-01
April 1 snowpack accumulations measured at 311 snow courses in the western United States (U.S.) are grouped using a correlation-based cluster analysis. A conceptual snow accumulation and melt model and monthly temperature and precipitation for each cluster are used to estimate cluster-average April 1 snowpack. The conceptual snow model is subsequently used to estimate future snowpack by using changes in monthly temperature and precipitation simulated by the Canadian Centre for Climate Modeling and Analysis (CCC) and the Hadley Centre for Climate Prediction and Research (HADLEY) general circulation models (GCMs). Results for the CCC model indicate that although winter precipitation is estimated to increase in the future, increases in temperatures will result in large decreases in April 1 snowpack for the entire western US. Results for the HADLEY model also indicate large decreases in April 1 snowpack for most of the western US, but the decreases are not as severe as those estimated using the CCC simulations. Although snowpack conditions are estimated to decrease for most areas of the western US, both GCMs estimate a general increase in winter precipitation toward the latter half of the next century. Thus, water quantity may be increased in the western US; however, the timing of runoff will be altered because precipitation will more frequently occur as rain rather than as snow.
Discrete event simulation: the preferred technique for health economic evaluations?
Caro, Jaime J; Möller, Jörgen; Getsios, Denis
2010-12-01
To argue that discrete event simulation should be preferred to cohort Markov models for economic evaluations in health care. The basis for the modeling techniques is reviewed. For many health-care decisions, existing data are insufficient to fully inform them, necessitating the use of modeling to estimate the consequences that are relevant to decision-makers. These models must reflect what is known about the problem at a level of detail sufficient to inform the questions. Oversimplification will result in estimates that are not only inaccurate, but potentially misleading. Markov cohort models, though currently popular, have so many limitations and inherent assumptions that they are inadequate to inform most health-care decisions. An event-based individual simulation offers an alternative much better suited to the problem. A properly designed discrete event simulation provides more accurate, relevant estimates without being computationally prohibitive. It does require more data and may be a challenge to convey transparently, but these are necessary trade-offs to provide meaningful and valid results. In our opinion, discrete event simulation should be the preferred technique for health economic evaluations today. © 2010, International Society for Pharmacoeconomics and Outcomes Research (ISPOR).
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Assimilation of Ocean-Color Plankton Functional Types to Improve Marine Ecosystem Simulations
NASA Astrophysics Data System (ADS)
Ciavatta, S.; Brewin, R. J. W.; Skákala, J.; Polimene, L.; de Mora, L.; Artioli, Y.; Allen, J. I.
2018-02-01
We assimilated phytoplankton functional types (PFTs) derived from ocean color into a marine ecosystem model, to improve the simulation of biogeochemical indicators and emerging properties in a shelf sea. Error-characterized chlorophyll concentrations of four PFTs (diatoms, dinoflagellates, nanoplankton, and picoplankton), as well as total chlorophyll for comparison, were assimilated into a physical-biogeochemical model of the North East Atlantic, applying a localized Ensemble Kalman filter. The reanalysis simulations spanned the years 1998-2003. The skill of the reference and reanalysis simulations in estimating ocean color and in situ biogeochemical data were compared by using robust statistics. The reanalysis outperformed both the reference and the assimilation of total chlorophyll in estimating the ocean-color PFTs (except nanoplankton), as well as the not-assimilated total chlorophyll, leading the model to simulate better the plankton community structure. Crucially, the reanalysis improved the estimates of not-assimilated in situ data of PFTs, as well as of phosphate and pCO2, impacting the simulation of the air-sea carbon flux. However, the reanalysis increased further the model overestimation of nitrate, in spite of increases in plankton nitrate uptake. The method proposed here is easily adaptable for use with other ecosystem models that simulate PFTs, for, e.g., reanalysis of carbon fluxes in the global ocean and for operational forecasts of biogeochemical indicators in shelf-sea ecosystems.
Fire spread estimation on forest wildfire using ensemble kalman filter
NASA Astrophysics Data System (ADS)
Syarifah, Wardatus; Apriliani, Erna
2018-04-01
Wildfire is one of the most frequent disasters in the world, for example forest wildfire, causing population of forest decrease. Forest wildfire, whether naturally occurring or prescribed, are potential risks for ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. The model of fire spread on forest wildfire was established to determine the fire properties. The fire spread model is prepared based on the equation of the diffusion reaction model. There are many methods to estimate the spread of fire. The Kalman Filter Ensemble Method is a modified estimation method of the Kalman Filter algorithm that can be used to estimate linear and non-linear system models. In this research will apply Ensemble Kalman Filter (EnKF) method to estimate the spread of fire on forest wildfire. Before applying the EnKF method, the fire spread model will be discreted using finite difference method. At the end, the analysis obtained illustrated by numerical simulation using software. The simulation results show that the Ensemble Kalman Filter method is closer to the system model when the ensemble value is greater, while the covariance value of the system model and the smaller the measurement.
Composing problem solvers for simulation experimentation: a case study on steady state estimation.
Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M
2014-01-01
Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
Estimating and validating harvesting system production through computer simulation
John E. Baumgras; Curt C. Hassler; Chris B. LeDoux
1993-01-01
A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...
Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K
2017-05-01
Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to high. However, given the lack of dietary simulation reporting guidelines, future work could refine the quality tool to harmonize consistency in the reporting of subsequent dietary modeling studies. © 2017 American Society for Nutrition.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic through (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected among one-year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for simulating the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year simulation showed very good agreement for the gross and net electric total production. The estimations for these magnitudes show a 1.47% and 2.02% BIAS respectively. The results proved that the simulation software describes with great accuracy the real operation of the power plant and correctly reproduces its transient behavior.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan
2018-03-27
Health workforce planning models have been developed to estimate the future health workforce requirements for a population whom they serve and have been used to inform policy decisions. To adapt and further develop a need-based GP workforce simulation model to incorporate current and estimated geographic distribution of patients and GPs. A need-based simulation model that estimates the supply of GPs and levels of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the differences in the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards with a shortage of 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences to its structure that allows within and cross jurisdictional comparisons of workforce estimations. It also provides greater insights into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
Estimating and validating ground-based timber harvesting production through computer simulation
Jingxin Wang; Chris B. LeDoux
2003-01-01
Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Growth modeling of Listeria monocytogenes in pasteurized liquid egg.
Ohkochi, Miho; Koseki, Shigenobu; Kunou, Masaaki; Sugiura, Katsuaki; Tsubone, Hirokazu
2013-09-01
The growth kinetics of Listeria monocytogenes and natural flora in commercially produced pasteurized liquid egg was examined at 4.1 to 19.4°C, and a growth simulation model that can estimate the range of the number of L. monocytogenes bacteria was developed. The experimental kinetic data were fitted to the Baranyi model, and growth parameters, such as maximum specific growth rate (μ(max)), maximum population density (N(max)), and lag time (λ), were estimated. As a result of estimating these parameters, we found that L. monocytogenes can grow without spoilage below 12.2°C, and we then focused on storage temperatures below 12.2°C in developing our secondary models. The temperature dependency of the μ(max) was described by Ratkowsky's square root model. The N(max) of L. monocytogenes was modeled as a function of temperature, because the N(max) of L. monocytogenes decreased as storage temperature increased. A tertiary model of L. monocytogenes was developed using the Baranyi model and μ(max) and N(max) secondary models. The ranges of the numbers of L. monocytogenes bacteria were simulated using Monte Carlo simulations with an assumption that these parameters have variations that follow a normal distribution. Predictive simulations under both constant and fluctuating temperature conditions demonstrated a high accuracy, represented by root mean square errors of 0.44 and 0.34, respectively. The predicted ranges also seemed to show a reasonably good estimation, with 55.8 and 51.5% of observed values falling into the prediction range of the 25th to 75th percentile, respectively. These results suggest that the model developed here can be used to estimate the kinetics and range of L. monocytogenes growth in pasteurized liquid egg under refrigerated temperature.
2016-07-21
constants. The model (2.42) is popular for simulation of the UAV motion [60], [61], [62] due to the fact that it models the aircraft response to...inputs to the dynamic model (2.42). The concentration sensors onboard the UAV record concentration ( simulated ) data according to its spatial location...vehicle dynamics and guidance, and the onboard sensor modeling . 15. SUBJECT TERMS State estimation; UAVs , mobile sensors; grid adaptationj; plume
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style
ERIC Educational Resources Information Center
Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman
2016-01-01
The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…
Benjamin Wang; Robert E. Manning; Steven R. Lawson; William A. Valliere
2001-01-01
Recent research and management experience has led to several frameworks for defining and managing carrying capacity of national parks and related areas. These frameworks rely on monitoring indicator variables to ensure that standards of quality are maintained. The objective of this study was to develop a computer simulation model to estimate the relationships between...
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units consider sulfate attack on concrete and utilize degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
ERIC Educational Resources Information Center
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin
Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.
2008-01-01
In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Centre and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.
Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.15)
NASA Astrophysics Data System (ADS)
Long, A. J.
2015-03-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, spring flow, groundwater level, or solute transport for a measurement point in response to a system input of precipitation, recharge, or solute injection. I introduce the first version of RRAWFLOW available for download and public use and describe additional options. The open-source code is written in the R language and is available at http://sd.water.usgs.gov/projects/RRAWFLOW/RRAWFLOW.html along with an example model of streamflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit-hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Several options are included to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications (e.g., estimating missing periods in a hydrologic record). RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
Long, Andrew J.; Putnam, Larry D.
2010-01-01
The Ogallala and Arikaree aquifers are important water resources in the Rosebud Indian Reservation area and are used extensively for irrigation, municipal, and domestic water supplies. Drought or increased withdrawals from the Ogallala and Arikaree aquifers in the Rosebud Indian Reservation area have the potential to affect water levels in these aquifers. This report documents revisions and recalibration of a previously published three-dimensional, numerical groundwater-flow model for this area. Data for a 30-year period (water years 1979 through 2008) were used in steady-state and transient numerical simulations of groundwater flow. In the revised model, revisions include (1) extension of the transient calibration period by 10 years, (2) the use of inverse modeling for steady-state calibration, (3) model calibration to base flow for an additional four surface-water drainage basins, (4) improved estimation of transient aquifer recharge, (5) improved delineation of vegetation types, and (6) reduced cell size near large capacity water-supply wells. In addition, potential future scenarios were simulated to assess the potential effects of drought and increased groundwater withdrawals.The model comprised two layers: the upper layer represented the Ogallala aquifer and the lower layer represented the Arikaree aquifer. The model’s grid had 168 rows and 202 columns, most of which were 1,640 feet (500 meters) wide, with narrower rows and columns near large water-supply wells. Recharge to the Ogallala and Arikaree aquifers occurs from precipitation on the outcrop areas. The average recharge rates used for the steady-state simulation were 2.91 and 1.45 inches per year for the Ogallala aquifer and Arikaree aquifer, respectively, for a total rate of 255.4 cubic feet per second (ft3/s). Discharge from the aquifers occurs through evapotranspiration, discharge to streams as base flow and spring flow, and well withdrawals. Discharge rates for the steady-state simulation were 171.3 ft3/s for evapotranspiration, 74.4 ft3/s for net outflow to streams and springs, and 11.6 ft3/s for well withdrawals. Estimated horizontal hydraulic conductivity used for the numerical model ranged from 0.2 to 84.4 feet per day (ft/d) in the Ogallala aquifer and from 0.1 to 4.3 ft/d in the Arikaree aquifer. A uniform vertical hydraulic conductivity value of 4.2x10-4 ft/d was estimated for the Ogallala aquifer. Vertical hydraulic conductivity was estimated for five zones in the Arikaree aquifer and ranged from 8.8x10-5 to 3.7 ft/d. Average rates of recharge, maximum evapotranspiration, and well withdrawals were included in the steady-state simulation, whereas the time-varying rates were included in the transient simulation.Inverse modeling techniques were used for steady-state model calibration. These methods were designed to estimate parameter values that are, statistically, the most likely set of values to result in the smallest differences between simulated and observed hydraulic heads and base-flow discharges. For the steady-state simulation, the root mean square error for simulated hydraulic heads for all 383 wells was 27.3 feet. Simulated hydraulic heads were within ±50 feet of observed values for 93 percent of the wells. The potentiometric surfaces of the two aquifers calculated by the steady-state simulation established initial conditions for the transient simulation. For the transient simulation, the difference between the simulated and observed means for hydrographs was within ±40 feet for 98 percent of 44 observation wells.A sensitivity analysis was used to examine the response of the calibrated steady-state model to changes in model parameters including horizontal and vertical hydraulic conductivity, evapotranspiration, recharge, and riverbed conductance. The model was most sensitive to recharge and maximum evapotranspiration and least sensitive to riverbed and spring conductances.To simulate a potential future drought scenario, a synthetic recharge record was created, the mean of which was equal to 64 percent of the average estimated recharge rate for the 30-year calibration period. This synthetic recharge record was used to simulate the last 20 years of the calibration period under drought conditions. Compared with results of the calibrated model, decreases in hydraulic-head values for the drought scenario at the end of the simulation period were as much as 39 feet for the Ogallala aquifer. To simulate the effects of potential increases in pumping, well withdrawal rates were increased by 50 percent from those estimated for the 30-year calibration period for the last 20 years of the calibration period. Compared with results of the calibrated model, decreases in hydraulic-head values for the scenario of increased pumping at the end of the simulation period were as much as 13 feet for the Ogallala aquifer.This numerical model is suitable as a tool to help understand the flow system, to help confirm that previous estimates of aquifer properties were reasonable, and to estimate aquifer properties in areas without data. The model also is useful to help assess the effects of drought and increases in pumping by simulations of these scenarios, the results of which are not precise but may be considered when making water management decisions.
Analysis of longitudinal marginal structural models.
Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J
2004-07-01
In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
Estimating short-period dynamics using an extended Kalman filter
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Andrisani, Dominick
1990-01-01
An extended Kalman filter (EKF) is used to estimate the parameters of a low-order model from aircraft transient response data. The low-order model is a state space model derived from the short-period approximation of the longitudinal aircraft dynamics. The model corresponds to the pitch rate to stick force transfer function currently used in flying qualities analysis. Because of the model chosen, handling qualities information is also obtained. The parameters are estimated from flight data as well as from a six-degree-of-freedom, nonlinear simulation of the aircraft. These two estimates are then compared and the discrepancies noted. The low-order model is able to satisfactorily match both flight data and simulation data from a high-order computer simulation. The parameters obtained from the EKF analysis of flight data are compared to those obtained using frequency response analysis of the flight data. Time delays and damping ratios are compared and are in agreement. This technique demonstrates the potential to determine, in near real time, the extent of differences between computer models and the actual aircraft. Precise knowledge of these differences can help to determine the flying qualities of a test aircraft and lead to more efficient envelope expansion.
A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations
NASA Astrophysics Data System (ADS)
Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.
2018-02-01
Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North America mountains is 1,006 km3, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate peak North America SWE of 1,684 km3, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Simulating and validating coastal gradients in wind energy resources
NASA Astrophysics Data System (ADS)
Hahmann, Andrea; Floors, Rogier; Karagali, Ioanna; Vasiljevic, Nikola; Lea, Guillaume; Simon, Elliot; Courtney, Michael; Badger, Merete; Peña, Alfredo; Hasager, Charlotte
2016-04-01
The experimental campaign of the RUNE (Reducing Uncertainty of Near-shore wind resource Estimates) project took place on the western coast of Denmark during the winter 2015-2016. The campaign used onshore scanning lidar technology combined with ocean and satellite information and produced a unique dataset to study the transition in boundary layer dynamics across the coastal zone. The RUNE project aims at reducing the uncertainty of near-shore wind resource estimates produced by mesoscale modeling. With this in mind, simulations using the Weather Research and Forecasting (WRF) model were performed to identify the sensitivity in the coastal gradients of wind energy resources to various model parameters and model inputs. Among these: model horizontal grid spacing and the planetary boundary layer and surface-layer scheme. We report on the differences amongst these simulations and preliminary results on the comparison of the model simulations with the RUNE observations of lidar and satellite measurements and near coastal tall mast.
Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S
2015-03-15
The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example where six different control strategies, including both source-control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model, in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed in fulfilling the criteria based on emission limit values. The results presented in this study shows how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (integrated stormwater quality model, uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.
40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures
Code of Federal Regulations, 2010 CFR
2010-07-01
... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A.D.
2013-01-01
Soil surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
NASA Astrophysics Data System (ADS)
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A. D.
2013-07-01
surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…
NASA Astrophysics Data System (ADS)
Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon
2007-05-01
Simulation quality is determined by the knowledge of the parameters of the model. Yet the rheological models for polymer are often not very accurate, since the viscosity measurements are made under approximations as homogeneous temperature and empirical corrections as Bagley one. Furthermore rheological behaviors are often traduced by mathematical laws as the Cross or the Carreau-Yasuda ones, whose parameters are fitted from viscosity values, obtained with corrected experimental data, and not appropriate for each polymer. To correct these defaults, a table-like rheological model is proposed. This choice makes easier the estimation of model parameters, since each parameter has the same order of magnitude. As the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated, in order to provide us temperature along the extrusion channel, pressure and flow references. These data allow to characterize thermal transfers and flow phenomena, in which the viscosity is implied. Furthermore the different natures of data allow to estimate viscosity for a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for non-Newtonian rheology. This method couples measurements and simulation, constitutes a very accurate mean of rheology determination, and allows to improve the prediction abilities of the model.
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea wave was tested as a parameter for estimating wave direction in the search for better results in estimates of wave direction in shallow waters, where waves of different sizes, frequencies and directions intersect and it is difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that the curvature is a promising technique to estimate wave directions.•In this study, the accuracy and precision of curvature parameters to measure wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.•The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature and they were used to analyze the variability of estimated directions.•The simultaneous acquisition of slope and curvature parameters can contribute to estimates wave direction, thus increasing accuracy and precision of results.
Hirai, Toshinori; Kimura, Toshimi; Echizen, Hirotoshi
2016-01-01
Whether renal dysfunction influences the hypouricemic effect of febuxostat, a xanthine oxidase (XO) inhibitor, in patients with hyperuricemia due to overproduction or underexcretion of uric acid (UA) remains unclear. We aimed to address this question with a modeling and simulation approach. The pharmacokinetics (PK) of febuxostat were analyzed using data from the literature. A kinetic model of UA was retrieved from a previous human study. Renal UA clearance was estimated as a function of creatinine clearance (CLcr) but non-renal UA clearance was assumed constant. A reversible inhibition model for bovine XO was adopted. Integrating these kinetic formulas, we developed a PK-pharmacodynamic (PK-PD) model for estimating the time course of the hypouricemic effect of febuxostat as a function of baseline UA level, febuxostat dose, treatment duration, body weight, and CLcr. Using the Monte Carlo simulation method, we examined the performance of the model by comparing predicted UA levels with those reported in the literature. We also modified the models for application to hyperuricemia due to UA overproduction or underexcretion. Thirty-nine data sets comprising 735 volunteers or patients were retrieved from the literature. A good correlation was observed between the hypouricemic effects of febuxostat estimated by our PK-PD model and those reported in the articles (observed) (r=0.89, p<0.001). The hypouricemic effect was estimated to be augmented in patients with renal dysfunction irrespective of the etiology of hyperuricemia. While validation in clinical studies is needed, the modeling and simulation approach may be useful for individualizing febuxostat doses in patients with various clinical characteristics.
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographical and biological data; thus it naturally reacts to environment changes. We first formulate an initial-boundary value problem and prove existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally twin experiments in which pertubated parameters are recovered from simulated data are successfully conducted.
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
NASA Astrophysics Data System (ADS)
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
A Global System for Transportation Simulation and Visualization in Emergency Evacuation Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Liu, Cheng; Thomas, Neil
2015-01-01
Simulation-based studies are frequently used for evacuation planning and decision making processes. Given the transportation systems complexity and data availability, most evacuation simulation models focus on certain geographic areas. With routine improvement of OpenStreetMap road networks and LandScanTM global population distribution data, we present WWEE, a uniform system for world-wide emergency evacuation simulations. WWEE uses unified data structure for simulation inputs. It also integrates a super-node trip distribution model as the default simulation parameter to improve the system computational performance. Two levels of visualization tools are implemented for evacuation performance analysis, including link-based macroscopic visualization and vehicle-based microscopic visualization. Formore » left-hand and right-hand traffic patterns in different countries, the authors propose a mirror technique to experiment with both scenarios without significantly changing traffic simulation models. Ten cities in US, Europe, Middle East, and Asia are modeled for demonstration. With default traffic simulation models for fast and easy-to-use evacuation estimation and visualization, WWEE also retains the capability of interactive operation for users to adopt customized traffic simulation models. For the first time, WWEE provides a unified platform for global evacuation researchers to estimate and visualize their strategies performance of transportation systems under evacuation scenarios.« less
Guillermo A. Mendoza; Roger J. Meimban; Philip A. Araman; William G. Luppold
1991-01-01
A log inventory model and a real-time hardwood process simulation model were developed and combined into an integrated production planning and control system for hardwood sawmills. The log inventory model was designed to monitor and periodically update the status of the logs in the log yard. The process simulation model was designed to estimate various sawmill...
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.11)
NASA Astrophysics Data System (ADS)
Long, A. J.
2014-09-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, springflow, groundwater level, solute transport, or cave drip for a measurement point in response to a system input of precipitation, recharge, or solute injection. The RRAWFLOW open-source code is written in the R language and is included in the Supplement to this article along with an example model of springflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution; i.e., the unit hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Other options include the use of user-defined IRFs and different methods to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications. RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
Starn, J. Jeffrey; Stone, Janet Radway; Mullaney, John R.
2000-01-01
Contributing areas to public-supply wells at the Southbury Training School in Southbury, Connecticut, were mapped by simulating ground-water flow in stratified glacial deposits in the lower Transylvania Brook watershed. The simulation used nonlinear regression methods and informational statistics to estimate parameters of a ground-water flow model using drawdown data from an aquifer test. The goodness of fit of the model and the uncertainty associated with model predictions were statistically measured. A watershed-scale model, depicting large-scale ground-water flow in the Transylvania Brook watershed, was used to estimate the distribution of groundwater recharge. Estimates of recharge from 10 small basins in the watershed differed on the basis of the drainage characteristics of each basin. Small basins having well-defined stream channels contributed less ground-water recharge than basins having no defined channels because potential ground-water recharge was carried away in the stream channel. Estimates of ground-water recharge were used in an aquifer-scale parameter-estimation model. Seven variations of the ground-water-flow system were posed, each representing the ground-water-flow system in slightly different but realistic ways. The model that most closely reproduced measured hydraulic heads and flows with realistic parameter values was selected as the most representative of the ground-water-flow system and was used to delineate boundaries of the contributing areas. The model fit revealed no systematic model error, which indicates that the model is likely to represent the major characteristics of the actual system. Parameter values estimated during the simulation are as follows: horizontal hydraulic conductivity of coarse-grained deposits, 154 feet per day; vertical hydraulic conductivity of coarse-grained deposits, 0.83 feet per day; horizontal hydraulic conductivity of fine-grained deposits, 29 feet per day; specific yield, 0.007; specific storage, 1.6E-05. Average annual recharge was estimated using the watershed-scale model with no parameter estimation and was determined to be 24 inches per year in the valley areas and 9 inches per year in the upland areas. The parameter estimates produced in the model are similar to expected values, with two exceptions. The estimated specific yield of the stratified glacial deposits is lower than expected, which could be caused by the layered nature of the deposits. The recharge estimate produced by the model was also lower?about 32 percent of the average annual rate. This could be caused by the timing of the aquifer test with respect to the annual cycle of ground-water recharge, and by some of the expected recharge going to parts of the flow system that were not simulated. The data used in the calibration were collected during an aquifer test from October 30 to November 4, 1996. The model fit was very good, as indicated by the correlation coefficient (0.999) between the weighted simulated values and weighted observed values. The model also reproduced the general rise in ground-water levels caused by ground-water recharge and the cyclic fluctuations caused by pumping prior to the aquifer test. Contributing areas were delineated using a particle-tracking procedure. Hypothetical particles of water were introduced at each model cell in the top layer and were tracked to determine whether or not they reached the pumped well. A deterministic contributing area was calculated using the calibrated model, and a probabilistic contributing area was calculated using a Monte Carlo approach along with the calibrated model. The Monte Carlo simulation was done, using the parameter variance/covariance matrix generated by the regression model, to estimate probabilities associated with the contributing area to the wells. The probabilities arise from uncertainty in the estimated parameter values, which in turn arise from the adequacy of the data available to comprehensively describe the groundwater-flow sy
Multi-Site λ-dynamics for simulated Structure-Activity Relationship studies
Knight, Jennifer L.; Brooks, Charles L.
2011-01-01
Multi-Site λ-dynamics (MSλD) is a new free energy simulation method that is based on λ-dynamics. It has been developed to enable multiple substituents at multiple sites on a common ligand core to be modeled simultaneously and their free energies assessed. The efficacy of MSλD for estimating relative hydration free energies and relative binding affinties is demonstrated using three test systems. Model compounds representing multiple identical benzene, dihydroxybenzene and dimethoxybenzene molecules show total combined MSλD trajectory lengths of ~1.5 ns are sufficient to reliably achieve relative hydration free energy estimates within 0.2 kcal/mol and are less sensitive to the number of trajectories that are used to generate these estimates for hybrid ligands that contain up to ten substituents modeled at a single site or five substituents modeled at each of two sites. Relative hydration free energies among six benzene derivatives calculated from MSλD simulations are in very good agreement with those from alchemical free energy simulations (with average unsigned differences of 0.23 kcal/mol and R2=0.991) and experiment (with average unsigned errors of 1.8 kcal/mol and R2=0.959). Estimates of the relative binding affinities among 14 inhibitors of HIV-1 reverse transcriptase obtained from MSλD simulations are in reasonable agreement with those from traditional free energy simulations and experiment (average unsigned errors of 0.9 kcal/mol and R2=0.402). For the same level of accuracy and precision MSλD simulations are achieved ~20–50 times faster than traditional free energy simulations and thus with reliable force field parameters can be used effectively to screen tens to hundreds of compounds in structure-based drug design applications. PMID:22125476
NASA Astrophysics Data System (ADS)
Farmer, W. H.; Kiang, J. E.
2017-12-01
The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regards to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
Freight Transportation Energy Use : Appendix. Transportation Network Model Output.
DOT National Transportation Integrated Search
1978-07-01
The overall design of the TSC Freight Energy Model is presented. A hierarchical modeling strategy is used, in which detailed modal simulators estimate the performance characteristics of transportation network elements, and the estimates are input to ...
NASA Astrophysics Data System (ADS)
Zhang, Kai; Batterman, Stuart
2010-05-01
The contribution of vehicular traffic to air pollutant concentrations is often difficult to establish. This paper utilizes both time-series and simulation models to estimate vehicle contributions to pollutant levels near roadways. The time-series model used generalized additive models (GAMs) and fitted pollutant observations to traffic counts and meteorological variables. A one year period (2004) was analyzed on a seasonal basis using hourly measurements of carbon monoxide (CO) and particulate matter less than 2.5 μm in diameter (PM 2.5) monitored near a major highway in Detroit, Michigan, along with hourly traffic counts and local meteorological data. Traffic counts showed statistically significant and approximately linear relationships with CO concentrations in fall, and piecewise linear relationships in spring, summer and winter. The same period was simulated using emission and dispersion models (Motor Vehicle Emissions Factor Model/MOBILE6.2; California Line Source Dispersion Model/CALINE4). CO emissions derived from the GAM were similar, on average, to those estimated by MOBILE6.2. The same analyses for PM 2.5 showed that GAM emission estimates were much higher (by 4-5 times) than the dispersion model results, and that the traffic-PM 2.5 relationship varied seasonally. This analysis suggests that the simulation model performed reasonably well for CO, but it significantly underestimated PM 2.5 concentrations, a likely result of underestimating PM 2.5 emission factors. Comparisons between statistical and simulation models can help identify model deficiencies and improve estimates of vehicle emissions and near-road air quality.
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells as well as using the input and output of ABM to build up the Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like DE model. Therefore, this study innovatively developed a complex system development mechanism that could simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of model like DE by fitting the experimental data. PMID:26535589
Performance of nonlinear mixed effects models in the presence of informative dropout.
Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H
2015-01-01
Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. ?? 2010 Springer Science+Business Media, LLC.
DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS
The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan
2016-07-04
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesianmore » model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.« less
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...
2016-06-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesianmore » model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesianmore » model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.« less
NASA Astrophysics Data System (ADS)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura
2016-07-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
Estimation of saltation emission in the Kubuqi Desert, North China.
Du, Heqiang; Xue, Xian; Wang, Tao
2014-05-01
The Kubuqi Desert suffered more severe wind erosion hazard. Every year, a mass of aeolian sand was blown in the Ten Tributaries that are tributaries of the Yellow River. To estimate the quantity of aeolian sediment blown into the Ten Tributaries from the Kubuqi Desert, it is necessary to simulate the saltation processes of the Kubuqi Desert. A saltation submodel of the IWEMS (Integrated Wind-Erosion Modeling System) and its accompanying RS (Remote Sensing) and GIS (Geographic Information System) methods were used to model saltation emissions in the Kubuqi Desert. To calibrate the saltation submodel, frontal area of vegetation, soil moisture, wind velocity and saltation sediment were observed synchronously on several points in 2011 and 2012. In this study, a model namely BEACH (Bridge Event And Continuous Hydrological) was introduced to simulate the daily soil moisture. Using the surface parameters (frontal area of vegetation and soil moisture) along with the observed wind velocities and saltation sediments for the observed points, the saltation model was calibrated and validated. To reduce the simulate error, a subdaily wind velocity program, WINDGEN was introduced in this model to simulate the hourly wind velocity of the Kubuqi Desert. By incorporating simulated hourly wind velocity, and model variables, the saltation emission of the Kubuqi Desert was modeled. The model results show that the total sediment flow rate was 1-30.99 tons/m over the last 10years (2001-2010). The saltation emission mainly occurs in the north central part of the Kubuqi Desert in winter and spring. Integrating the wind directions, the quantity of the aeolian sediment that deposits in the Ten Tributaries was estimated. Compared with the observed data by the local government and hydrometric stations, our estimation is reasonable. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >105 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-01-01
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552
NASA Astrophysics Data System (ADS)
Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco
2013-05-01
The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable the friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, the portion of the lateral acceleration profile, only inside the tyre-road contact patch, is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for orientation-variation of accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.
Gao, Nuo; Zhu, S A; He, Bin
2005-06-07
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
NASA Astrophysics Data System (ADS)
Tsumune, Daisuke; Aoyama, Michio; Tsubono, Takaki; Tateda, Yutaka; Misumi, Kazuhiro; Hayami, Hiroshi; Toyoda, Yasuhiro; Maeda, Yoshiaki; Yoshida, Yoshikatsu; Uematsu, Mitsuo
2014-05-01
A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials to the ocean by two major pathways, direct release from the accident site and atmospheric deposition. We reconstructed spatiotemporal variability of 137Cs activity in the ocean by the comparison model simulations and observed data. We employed a regional scale and the North Pacific scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota and river runoff model to investigate the oceanic contamination. Direct releases of 137Cs were estimated for more than 2 years after the accident by comparing simulated results and observed activities very close to the site. The estimated total amounts of directly released 137Cs was 3.6±0.7 PBq. Directly release rate of 137Cs decreased exponentially with time by the end of December 2012 and then, was almost constant. The daily release rate of 137Cs was estimated to be 3.0 x 1010 Bq day-1 by the end of September 2013. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable, while simulated 137Cs activities attributable to atmospheric deposition were low compared to measured activities. The rate of atmospheric deposition onto the ocean was underestimated because of a lack of measurements of dose rate and air activity of 137Cs over the ocean when atmospheric deposition rates were being estimated. Observed 137Cs activities attributable to atmospheric deposition in the ocean helped to improve the accuracy of simulated atmospheric deposition rates. Although there is no observed data of 137Cs activity in the ocean from 11 to 21 March 2011, observed data of marine biota should reflect the history of 137Cs activity in this early period. The comparisons between simulated 137Cs activity of marine biota by a dynamic biological compartment and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. In addition, river runoff model simulations suggest that the river flux of 137Cs to the ocean was effective to the 137Cs activity in the ocean in this early period. The sediment transport model simulations suggests that the inventory of 137Cs in sediment was less than 10
NASA Astrophysics Data System (ADS)
Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke
In this paper put forward are neuron-type models, i.e., neural network model, wavelet neuron model and three layered wavelet neuron model(WV3), for estimating traveling time between signalized intersections in order to facilitate adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that compared to other models, WV3 model works very fast in learning process and can produce more accurate estimates of travel time. Also, it is exhibited that up-link information obtainable from optical beacons, i.e., travel time observed during the former cycle time in this case, makes a crucial input variable to the models in that there isn't any substantial difference between the change of estimated and simulated travel time with the change of green time or offset when up-link information is employed as input while there appears big discrepancy between them when not employed.
NASA Astrophysics Data System (ADS)
Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.
2009-08-01
A Dynamic Energy Budget (DEB) model for simulation of growth and bioenergetics of blue mussels ( Mytilus edulis) has been tested in three low seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used for estimation of biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half saturation coefficient, and somatic maintenance rate coefficient) were estimated from experimental data, and the estimated parameters were complemented with parameter values from literature to establish a basic parameter set. Model simulations based on the basic parameter set and site specific environmental forcing matched fairly well with observations, but the model was not successful in simulating growth at the extreme low seston regimes in the laboratory experiments in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameter and initial conditions. The results show the robust properties of the DEB model as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate limitations of Chl a as a food proxy for blue mussels and limitations of the DEB model to simulate long term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be elaborated to allow shrinking in the structural tissue in order to produce more realistic growth simulations during long periods of starvation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaas, Johannes; Ming, Yi; Menon, Surabi
2009-04-10
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the model explicitly parameterizes aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (Ta) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found thatmore » the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data at least over the ocean. The relationship between Ta and liquid water path is simulated much too strongly by the models. It is shown that this is partly related to the representation of the second aerosol indirect effect in terms of autoconversion. A positive relationship between total cloud fraction (fcld) and Ta as found in the satellite data is simulated by the majority of the models, albeit less strongly than that in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld - Ta relationship, our results indicate that none can be identified as unique explanation. Relationships similar to the ones found in satellite data between Ta and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR - Ta relationship show a strong positive correlation between Ta and fcld The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of Ta, and parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships yields a global annual mean value of -1.5+-0.5 Wm-2. An alternative estimate obtained by scaling the simulated clear- and cloudy-sky forcings with estimates of anthropogenic Ta and satellite-retrieved Nd - Ta regression slopes, respectively, yields a global annual mean clear-sky (aerosol direct effect) estimate of -0.4+-0.2 Wm-2 and a cloudy-sky (aerosol indirect effect) estimate of -0.7+-0.5 Wm-2, with a total estimate of -1.2+-0.4 Wm-2.« less
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.
2015-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). More than 1,700 gaged watersheds across the CONUS were modeled to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models with remotely-sensed data products (i.e. - snow water equivalent) and estimates of uncertainty. Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison. As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. - snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve simulations of streamflow for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of simulated and measured information for model development and calibration at a given location of interest. In addition, these calibration strategies have been developed to be flexible so that new data products or simulated information can be assimilated. This analysis provides a foundation to understand how well models work when streamflow data is either not available or is limited and could be used to further inform hydrologic model parameter development for ungaged areas.
Case Studies of Forecasting Ionospheric Total Electron Content
NASA Astrophysics Data System (ADS)
Mannucci, A. J.; Meng, X.; Verkhoglyadova, O. P.; Tsurutani, B.; McGranaghan, R. M.
2017-12-01
We report on medium-range forecast-mode runs of ionosphere-thermosphere coupled models that calculate ionospheric total electron content (TEC), focusing on low-latitude daytime conditions. A medium-range forecast-mode run refers to simulations that are driven by inputs that can be predicted 2-3 days in advance, for example based on simulations of the solar wind. We will present results from a weak geomagnetic storm caused by a high-speed solar wind stream on June 29, 2012. Simulations based on the Global Ionosphere Thermosphere Model (GITM) and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIEGCM) significantly over-estimate TEC in certain low latitude daytime regions, compared to TEC maps based on observations. We will present the results from a more intense coronal mass ejection (CME) driven storm where the simulations are closer to observations. We compare high latitude data sets to model inputs, such as auroral boundary and convection patterns, to assess the degree to which poorly estimated high latitude drivers may be the largest cause of discrepancy between simulations and observations. Our results reveal many factors that can affect the accuracy of forecasts, including the fidelity of empirical models used to estimate high latitude precipitation patterns, or observation proxies for solar EUV spectra, such as the F10.7 index. Implications for forecasts with few-day lead times are discussed
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials; i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypothesis about the number of exponentials of the VO2 on-kinetics especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then, to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated Data, performances of Simulated Annealing for estimating model parameters and performances of Information Criteria for selecting the order. These simulated Data are generated with both single-exponential and double-exponential models, and noised by white and Gaussian noise. The performances are given at various Signal to Noise Ratio (SNR). Considering parameter estimation, results show that the confidences of estimated parameters are improved by increasing the SNR of the response to be fitted. Considering model selection, results show that Information Criteria are adapted statistical criteria to select the number of exponentials.
Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.
2016-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1km x 1km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R2 yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
McGowan, Conor P.; Gardner, Beth
2013-01-01
Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival for age based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with true survival rates of the simulation. Violating the accurate aging and the independence assumptions did not result in biased daily survival estimates, whereas unequal detection for younger or older birds and violating the birth death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth death equilibrium are key to the successful use of this method for precocial shorebirds.
Halford, Keith J.; Plume, Russell W.
2011-01-01
Assessing hydrologic effects of developing groundwater supplies in Snake Valley required numerical, groundwater-flow models to estimate the timing and magnitude of capture from streams, springs, wetlands, and phreatophytes. Estimating general water-table decline also required groundwater simulation. The hydraulic conductivity of basin fill and transmissivity of basement-rock distributions in Spring and Snake Valleys were refined by calibrating a steady state, three-dimensional, MODFLOW model of the carbonate-rock province to predevelopment conditions. Hydraulic properties and boundary conditions were defined primarily from the Regional Aquifer-System Analysis (RASA) model except in Spring and Snake Valleys. This locally refined model was referred to as the Great Basin National Park calibration (GBNP-C) model. Groundwater discharges from phreatophyte areas and springs in Spring and Snake Valleys were simulated as specified discharges in the GBNP-C model. These discharges equaled mapped rates and measured discharges, respectively. Recharge, hydraulic conductivity, and transmissivity were distributed throughout Spring and Snake Valleys with pilot points and interpolated to model cells with kriging in geologically similar areas. Transmissivity of the basement rocks was estimated because thickness is correlated poorly with transmissivity. Transmissivity estimates were constrained by aquifer-test results in basin-fill and carbonate-rock aquifers. Recharge, hydraulic conductivity, and transmissivity distributions of the GBNP-C model were estimated by minimizing a weighted composite, sum-of-squares objective function that included measurement and Tikhonov regularization observations. Tikhonov regularization observations were equations that defined preferred relations between the pilot points. Measured water levels, water levels that were simulated with RASA, depth-to-water beneath distributed groundwater and spring discharges, land-surface altitudes, spring discharge at Fish Springs, and changes in discharge on selected creek reaches were measurement observations. The effects of uncertain distributed groundwater-discharge estimates in Spring and Snake Valleys on transmissivity estimates were bounded with alternative models. Annual distributed groundwater discharges from Spring and Snake Valleys in the alternative models totaled 151,000 and 227,000 acre-feet, respectively and represented 20 percent differences from the 187,000 acre-feet per year that discharges from the GBNP-C model. Transmissivity estimates in the basin fill between Baker and Big Springs changed less than 50 percent between the two alternative models. Potential effects of pumping from Snake Valley were estimated with the Great Basin National Park predictive (GBNP-P) model, which is a transient groundwater-flow model. The hydraulic conductivity of basin fill and transmissivity of basement rock were the GBNP-C model distributions. Specific yields were defined from aquifer tests. Captures of distributed groundwater and spring discharges were simulated in the GBNP-P model using a combination of well and drain packages in MODFLOW. Simulated groundwater captures could not exceed measured groundwater-discharge rates. Four groundwater-development scenarios were investigated where total annual withdrawals ranged from 10,000 to 50,000 acre-feet during a 200-year pumping period. Four additional scenarios also were simulated that added the effects of existing pumping in Snake Valley. Potential groundwater pumping locations were limited to nine proposed points of diversion. Results are presented as maps of groundwater capture and drawdown, time series of drawdowns and discharges from selected wells, and time series of discharge reductions from selected springs and control volumes. Simulated drawdown propagation was attenuated where groundwater discharge could be captured. General patterns of groundwater capture and water-table declines were similar for all scenarios. Simulated drawdowns greater than 1 ft propagated outside of Spring and Snake Valleys after 200 years of pumping in all scenarios.
Coon, William F.
2011-01-01
Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.
Internal Variability and Disequilibrium Confound Estimates of Climate Sensitivity from Observations
NASA Technical Reports Server (NTRS)
Marvel, Kate; Pincus, Robert; Schmidt, Gavin A.; Miller, Ron L.
2018-01-01
An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future. Here we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2-radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long-term sensitivity.
Internal Variability and Disequilibrium Confound Estimates of Climate Sensitivity From Observations
NASA Astrophysics Data System (ADS)
Marvel, Kate; Pincus, Robert; Schmidt, Gavin A.; Miller, Ron L.
2018-02-01
An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future. Here we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2 radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long-term sensitivity.
Gleason, Robert A.; Tangen, Brian A.; Laubhan, Murray K.; Kermes, Kevin E.; Euliss, Ned H.
2007-01-01
Executive Summary Concern over flooding along rivers in the Prairie Pothole Region has stimulated interest in developing spatially distributed hydrologic models to simulate the effects of wetland water storage on peak river flows. Such models require spatial data on the storage volume and interception area of existing and restorable wetlands in the watershed of interest. In most cases, information on these model inputs is lacking because resolution of existing topographic maps is inadequate to estimate volume and areas of existing and restorable wetlands. Consequently, most studies have relied on wetland area to volume or interception area relationships to estimate wetland basin storage characteristics by using available surface area data obtained as a product from remotely sensed data (e.g., National Wetlands Inventory). Though application of areal input data to estimate volume and interception areas is widely used, a drawback is that there is little information available to provide guidance regarding the application, limitations, and biases associated with such approaches. Another limitation of previous modeling efforts is that water stored by wetlands within a watershed is treated as a simple lump storage component that is filled prior to routing overflow to a pour point or gaging station. This approach does not account for dynamic wetland processes that influence water stored in prairie wetlands. Further, most models have not considered the influence of human-induced hydrologic changes, such as land use, that greatly influence quantity of surface water inputs and, ultimately, the rate that a wetland basin fills and spills. The goals of this study were to (1) develop and improve methodologies for estimating and spatially depicting wetland storage volumes and interceptions areas and (2) develop models and approaches for estimating/simulating the water storage capacity of potentially restorable and existing wetlands under various restoration, land use, and climatic scenarios. To address these goals, we developed models and approaches to spatially represent storage volumes and interception areas of existing and potentially restorable wetlands in the upper Mustinka subbasin within Grant County, Minn. We then developed and applied a model to simulate wetland water storage increases that would result from restoring 25 and 50 percent of the farmed and drained wetlands in the upper Mustinka subbasin. The model simulations were performed during the growing season (May-October) for relatively wet (1993; 0.79 m of precipitation) and dry (1987; 0.40 m of precipitation) years. Results from the simulations indicated that the 25 percent restoration scenario would increase water storage by 21-24 percent and that a 50 percent scenario would increase storage by 34-38 percent. Additionally, we estimated that wetlands in the subbasin have potential to store 11.57-20.98 percent of the total precipitation that fell over the entire subbasin area (52,758 ha). Our simulation results indicated that there is considerable potential to enhance water storage in the subbasin; however, evaluation and calibration of the model is necessary before simulation results can be applied to management and planning decisions. In this report we present guidance for the development and application of models (e.g., surface area-volume predictive models, hydrology simulation model) to simulate wetland water storage to provide a basis from which to understand and predict the effects of natural or human-induced hydrologic alterations. In developing these approaches, we tried to use simple and widely available input data to simulate wetland hydrology and predict wetland water storage for a specific precipitation event or a series of events. Further, the hydrology simulation model accounted for land use and soil type, which influence surface water inputs to wetlands. Although information presented in this report is specific to the Mustinka subbasin, the approaches
Leveraging prognostic baseline variables to gain precision in randomized trials
Colantuoni, Elizabeth; Rosenblum, Michael
2015-01-01
We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
Cabaraban, Maria Theresa I; Kroll, Charles N; Hirabayashi, Satoshi; Nowak, David J
2013-05-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature and LAI fields, and CMAQ provided NO2 concentrations. A base case simulation was conducted using built-in distributed i-Tree Eco tools, and simulations using different inputs were compared against this base case. Differences in land cover classification and tree cover between the distributed i-Tree Eco and WRF resulted in changes in estimated LAI, which in turn resulted in variations in simulated NO2 dry deposition. Estimated NO2 removal decreased when CMAQ-derived concentration was applied to the distributed i-Tree Eco simulation. Discrepancies in temperature inputs did little to affect estimates of NO2 removal by dry deposition to trees in Baltimore. Copyright © 2013 Elsevier Ltd. All rights reserved.
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
Estimating demographic parameters using a combination of known-fate and open N-mixture models.
Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G
2015-10-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.
Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site
Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.
1995-01-01
A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
Investigation of Models and Estimation Techniques for GPS Attitude Determination
NASA Technical Reports Server (NTRS)
Garrick, J.
1996-01-01
Much work has been done in the Flight Dynamics Analysis Branch (FDAB) in developing algorithms to met the new and growing field of attitude determination using the Global Positioning SYstem (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that will produce the needed adaptation to allow the Flight Dynamics Support System (FDSS) to incorporate GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the necessary measurement data to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current modeling capabilities of the simulator models and algorithms for the adaptation of GPS measurement data and results from each of the estimation techniques. Future analysis efforts to evaluate the simulator and models against inflight GPS measurement data are also outlined.
A River Discharge Model for Coastal Taiwan during Typhoon Morakot
2012-08-01
Multidisciplinary Simulation, Estimation, and Assimilation Systems Reports in Ocean Science and Engineering MSEAS-13 A River Discharge...in this region. The island’s major rivers have correspondingly large drainage basins, and outflow from these river mouths can substantially reduce the...Multidisciplinary Simulation, Estimation, and Assimilation System (MSEAS) has been used to simulate the ocean dynamics and forecast the uncertainty
A simulation of probabilistic wildfire risk components for the continental United States
Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short
2011-01-01
This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose GEE-Smoothing spline in the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using smoothing spline specifically the natural cubic spline. We use profile algorithm in the estimation of both parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Simulating eroded soil organic carbon with the SWAT-C model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong
The soil erosion and associated lateral movement of eroded carbon (C) have been identified as a possible mechanism explaining the elusive terrestrial C sink of ca. 1.7-2.6 PgC yr(-1). Here we evaluated the SWAT-C model for simulating long-term soil erosion and associated eroded C yields. Our method couples the CENTURY carbon cycling processes with a Modified Universal Soil Loss Equation (MUSLE) to estimate C losses associated with soil erosion. The results show that SWAT-C is able to simulate well long-term average eroded C yields, as well as correctly estimate the relative magnitude of eroded C yields by crop rotations. Wemore » also evaluated three methods of calculating C enrichment ratio in mobilized sediments, and found that errors associated with enrichment ratio estimation represent a significant uncertainty in SWAT-C simulations. Furthermore, we discussed limitations and future development directions for SWAT-C to advance C cycling modeling and assessment.« less
Development of PARMA: PHITS-based analytical radiation model in the atmosphere.
Sato, Tatsuhiko; Yasuda, Hiroshi; Niita, Koji; Endo, Akira; Sihver, Lembit
2008-08-01
Estimation of cosmic-ray spectra in the atmosphere has been essential for the evaluation of aviation doses. We therefore calculated these spectra by performing Monte Carlo simulation of cosmic-ray propagation in the atmosphere using the PHITS code. The accuracy of the simulation was well verified by experimental data taken under various conditions, even near sea level. Based on a comprehensive analysis of the simulation results, we proposed an analytical model for estimating the cosmic-ray spectra of neutrons, protons, helium ions, muons, electrons, positrons and photons applicable to any location in the atmosphere at altitudes below 20 km. Our model, named PARMA, enables us to calculate the cosmic radiation doses rapidly with a precision equivalent to that of the Monte Carlo simulation, which requires much more computational time. With these properties, PARMA is capable of improving the accuracy and efficiency of the cosmic-ray exposure dose estimations not only for aircrews but also for the public on the ground.
NASA Astrophysics Data System (ADS)
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of any parameters to recharge. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. The value of a rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
Chen, T; Besio, W; Dai, W
2009-01-01
A comparison of the performance of the tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging the cardiac electrical activation has been investigated based on computer simulation. In the simulation a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model were developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activities. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators with the best performance in spatial resolution.
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1000 mi2) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
Optimizing Fukushima Emissions Through Pattern Matching and Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Simpson, M. D.; Philip, C. S.; Baskett, R.
2017-12-01
Hazardous conditions during the Fukushima Daiichi nuclear power plant (NPP) accident hindered direct observations of the emissions of radioactive materials into the atmosphere. A wide range of emissions are estimated from bottom-up studies using reactor inventories and top-down approaches based on inverse modeling. We present a new inverse modeling estimate of cesium-137 emitted from the Fukushima NPP. Our estimate considers weather uncertainty through a large ensemble of Weather Research and Forecasting model simulations and uses the FLEXPART atmospheric dispersion model to transport and deposit cesium. The simulations are constrained by observations of the spatial distribution of cumulative cesium deposited on the surface of Japan through April 2, 2012. Multiple spatial metrics are used to quantify differences between observed and simulated deposition patterns. In order to match the observed pattern, we use a multi-objective genetic algorithm to optimize the time-varying emissions. We find that large differences with published bottom-up estimates are required to explain the observations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
A rapid solvent accessible surface area estimator for coarse grained molecular simulations.
Wei, Shuai; Brooks, Charles L; Frank, Aaron T
2017-06-05
The rapid and accurate calculation of solvent accessible surface area (SASA) is extremely useful in the energetic analysis of biomolecules. For example, SASA models can be used to estimate the transfer free energy associated with biophysical processes, and when combined with coarse-grained simulations, can be particularly useful for accounting for solvation effects within the framework of implicit solvent models. In such cases, a fast and accurate, residue-wise SASA predictor is highly desirable. Here, we develop a predictive model that estimates SASAs based on Cα-only protein structures. Through an extensive comparison between this method and a comparable method, POPS-R, we demonstrate that our new method, Protein-C α Solvent Accessibilities or PCASA, shows better performance, especially for unfolded conformations of proteins. We anticipate that this model will be quite useful in the efficient inclusion of SASA-based solvent free energy estimations in coarse-grained protein folding simulations. PCASA is made freely available to the academic community at https://github.com/atfrank/PCASA. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
An empirical approach for estimating natural regeneration for the Forest Vegetation Simulator
Don Vandendriesche
2010-01-01
The âpartialâ establishment model that is available for most Forest Vegetation Simulator (FVS) geographic variants does not provide an estimate of natural regeneration. Users are responsible for supplying this key aspect of stand development. The process presented for estimating natural regeneration begins by summarizing small tree components based on observations from...
NASA Astrophysics Data System (ADS)
Ferdous, Nazneen; Bhat, Chandra R.
2013-01-01
This paper proposes and estimates a spatial panel ordered-response probit model with temporal autoregressive error terms to analyze changes in urban land development intensity levels over time. Such a model structure maintains a close linkage between the land owner's decision (unobserved to the analyst) and the land development intensity level (observed by the analyst) and accommodates spatial interactions between land owners that lead to spatial spillover effects. In addition, the model structure incorporates spatial heterogeneity as well as spatial heteroscedasticity. The resulting model is estimated using a composite marginal likelihood (CML) approach that does not require any simulation machinery and that can be applied to data sets of any size. A simulation exercise indicates that the CML approach recovers the model parameters very well, even in the presence of high spatial and temporal dependence. In addition, the simulation results demonstrate that ignoring spatial dependency and spatial heterogeneity when both are actually present will lead to bias in parameter estimation. A demonstration exercise applies the proposed model to examine urban land development intensity levels using parcel-level data from Austin, Texas.
Heat as a tracer to estimate dissolved organic carbon flux from a restored wetland
Burow, K.R.; Constantz, J.; Fujii, R.
2005-01-01
Heat was used as a natural tracer to characterize shallow ground water flow beneath a complex wetland system. Hydrogeologic data were combined with measured vertical temperature profiles to constrain a series of two-dimensional, transient simulations of ground water flow and heat transport using the model code SUTRA (Voss 1990). The measured seasonal temperature signal reached depths of 2.7 m beneath the pond. Hydraulic conductivity was varied in each of the layers in the model in a systematic manual calibration of the two-dimensional model to obtain the best fit to the measured temperature and hydraulic head. Results of a series of representative best-fit simulations represent a range in hydraulic conductivity values that had the best agreement between simulated and observed temperatures and that resulted in simulated pond seepage values within 1 order of magnitude of pond seepage estimated from the water budget. Resulting estimates of ground water discharge to an adjacent agricultural drainage ditch were used to estimate potential dissolved organic carbon (DOC) loads resulting from the restored wetland. Estimated DOC loads ranged from 45 to 1340 g C/(m2 year), which is higher than estimated DOC loads from surface water. In spite of the complexity in characterizing ground water flow in peat soils, using heat as a tracer provided a constrained estimate of subsurface flow from the pond to the agricultural drainage ditch. Copyright ?? 2005 National Ground Water Association.
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual difference between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced sum of weighted squared residuals by 20 % relative to the default simulation. Sensitivity analysis performed provides important insights into the model structure providing guidance for model improvement.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation, in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analyses techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, the local polynomial fitting is applied to estimate heteroscedastic function, then the coefficients of regression model are obtained by using generalized least squares method. One noteworthy feature of our approach is that we avoid the testing for heteroscedasticity by improving the traditional two-stage method. Due to non-parametric technique of local polynomial estimation, it is unnecessary to know the form of heteroscedastic function. Therefore, we can improve the estimation precision, when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients is asymptotic normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is surely effective in finite-sample situations.
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Wood, Eric W; Zhu, Lei
A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Futuremore » Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.« less
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1and H2to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1estimator.
A simulation-based approach for estimating premining water quality: Red Mountain Creek, Colorado
Runkel, Robert L.; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L.
2007-01-01
Regulatory agencies are often charged with the task of setting site-specific numeric water quality standards for impaired streams. This task is particularly difficult for streams draining highly mineralized watersheds with past mining activity. Baseline water quality data obtained prior to mining are often non-existent and application of generic water quality standards developed for unmineralized watersheds is suspect given the geology of most watersheds affected by mining. Various approaches have been used to estimate premining conditions, but none of the existing approaches rigorously consider the physical and geochemical processes that ultimately determine instream water quality. An approach based on simulation modeling is therefore proposed herein. The approach utilizes synoptic data that provide spatially-detailed profiles of concentration, streamflow, and constituent load along the study reach. This field data set is used to calibrate a reactive stream transport model that considers the suite of physical and geochemical processes that affect constituent concentrations during instream transport. A key input to the model is the quality and quantity of waters entering the study reach. This input is based on chemical analyses available from synoptic sampling and observed increases in streamflow along the study reach. Given the calibrated model, additional simulations are conducted to estimate premining conditions. In these simulations, the chemistry of mining-affected sources is replaced with the chemistry of waters that are thought to be unaffected by mining (proximal, premining analogues). The resultant simulations provide estimates of premining water quality that reflect both the reduced loads that were present prior to mining and the processes that affect these loads as they are transported downstream. This simulation-based approach is demonstrated using data from Red Mountain Creek, Colorado, a small stream draining a heavily-mined watershed. Model application to the premining problem for Red Mountain Creek is based on limited field reconnaissance and chemical analyses; additional field work and analyses may be needed to develop definitive, quantitative estimates of premining water quality.
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Time Domain Tool Validation Using ARES I-X Flight Data
NASA Technical Reports Server (NTRS)
Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay
2011-01-01
The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA s ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six (6) independent time domain simulation tools were developed and cross validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post flight data consists of telemetered Operational Flight Instrumentation (OFI) data primarily focused on flight computer outputs and sensor measurements as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post flight reconstructed models include: SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high fidelity, time-domain simulation tools that have been cross validated and validated against flight data. This paper presents the process and results of high fidelity aerospace modeling, simulation, analysis and tool validation in the time domain.
NASA Astrophysics Data System (ADS)
Legates, David R.; Junghenn, Katherine T.
2018-04-01
Many local weather station networks that measure a number of meteorological variables (i.e. , mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is Can meteorological data be used with a simple hydrologic model to estimate accurately daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model based on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model exhibits particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains and interception loss, resulting from grass not being short cropped as expected also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Strauch, Kellan R.; Linard, Joshua I.
2009-01-01
The U.S. Geological Survey, in cooperation with the Upper Elkhorn, Lower Elkhorn, Upper Loup, Lower Loup, Middle Niobrara, Lower Niobrara, Lewis and Clark, and Lower Platte North Natural Resources Districts, used the Soil and Water Assessment Tool to simulate streamflow and estimate percolation in north-central Nebraska to aid development of long-term strategies for management of hydrologically connected ground and surface water. Although groundwater models adequately simulate subsurface hydrologic processes, they often are not designed to simulate the hydrologically complex processes occurring at or near the land surface. The use of watershed models such as the Soil and Water Assessment Tool, which are designed specifically to simulate surface and near-subsurface processes, can provide helpful insight into the effects of surface-water hydrology on the groundwater system. The Soil and Water Assessment Tool was calibrated for five stream basins in the Elkhorn-Loup Groundwater Model study area in north-central Nebraska to obtain spatially variable estimates of percolation. Six watershed models were calibrated to recorded streamflow in each subbasin by modifying the adjustment parameters. The calibrated parameter sets were then used to simulate a validation period; the validation period was half of the total streamflow period of record with a minimum requirement of 10 years. If the statistical and water-balance results for the validation period were similar to those for the calibration period, a model was considered satisfactory. Statistical measures of each watershed model's performance were variable. These objective measures included the Nash-Sutcliffe measure of efficiency, the ratio of the root-mean-square error to the standard deviation of the measured data, and an estimate of bias. The model met performance criteria for the bias statistic, but failed to meet statistical adequacy criteria for the other two performance measures when evaluated at a monthly time step. A primary cause of the poor model validation results was the inability of the model to reproduce the sustained base flow and streamflow response to precipitation that was observed in the Sand Hills region. The watershed models also were evaluated based on how well they conformed to the annual mass balance (precipitation equals the sum of evapotranspiration, streamflow/runoff, and deep percolation). The model was able to adequately simulate annual values of evapotranspiration, runoff, and precipitation in comparison to reported values, which indicates the model may provide reasonable estimates of annual percolation. Mean annual percolation estimated by the model as basin averages varied within the study area from a maximum of 12.9 inches in the Loup River Basin to a minimum of 1.5 inches in the Shell Creek Basin. Percolation also varied within the studied basins; basin headwaters tended to have greater percolation rates than downstream areas. This variance in percolation rates was mainly was because of the predominance of sandy, highly permeable soils in the upstream areas of the modeled basins.
Observing and Simulating Diapycnal Mixing in the Canadian Arctic Archipelago
NASA Astrophysics Data System (ADS)
Hughes, K.; Klymak, J. M.; Hu, X.; Myers, P. G.; Williams, W. J.; Melling, H.
2016-12-01
High-spatial-resolution observations in the central Canadian Arctic Archipelago are analysed in conjunction with process-oriented modelling to estimate the flow pathways among the constricted waterways, understand the nature of the hydraulic control(s), and assess the influence of smaller scale (metres to kilometres) phenomena such as internal waves and topographically induced eddies. The observations repeatedly display isopycnal displacements of 50 m as dense water plunges over a sill. Depth-averaged turbulent dissipation rates near the sill estimated from these observations are typically 10-6-10-5 W kg-1, a range that is three orders of magnitude larger than that for the open ocean. These and other estimates are compared against a 1/12° basin-scale model from which we estimate diapycnal mixing rates using a volume-integrated advection-diffusion equation. Much of the mixing in this simulation is concentrated near constrictions within Barrow Strait and Queens Channel, the latter being our observational site. This suggests the model is capable of capturing topographically induced mixing. However, such mixing is expected to be enhanced in the presence of tides, a process not included in our basin scale simulation or other similar models. Quantifying this enhancement is another objective of our process-oriented modelling.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy
Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrates due to soil saturation and therefore circulates over the surface. A good estimation of runoff is useful for the design of draining systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) rational method, (ii) isochrone method, (iii) triangular hydrograph, (iv) non-dimensional SCS hydrograph, (v) Temez hydrograph, (vi) kinematic wave model, represented by the dynamics and kinematics equations for a uniforme precipitation regime, and (vii) SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area where one of the objectives is to develop the SCS-CN model in a spatial way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
Potential Predictability of U.S. Summer Climate with "Perfect" Soil Moisture
NASA Technical Reports Server (NTRS)
Yang, Fanglin; Kumar, Arun; Lau, K.-M.
2004-01-01
The potential predictability of surface-air temperature and precipitation over the United States continent was assessed for a GCM forced by observed sea surface temperatures and an estimate of observed ground soil moisture contents. The latter was obtained by substituting the GCM simulated precipitation, which is used to drive the GCM's land-surface component, with observed pentad-mean precipitation at each time step of the model's integration. With this substitution, the simulated soil moisture correlates well with an independent estimate of observed soil moisture in all seasons over the entire US continent. Significant enhancements on the predictability of surface-air temperature and precipitation were found in boreal late spring and summer over the US continent. Anomalous pattern correlations of precipitation and surface-air temperature over the US continent in the June-July-August season averaged for the 1979-2000 period increased from 0.01 and 0.06 for the GCM simulations without precipitation substitution to 0.23 and 0.3 1, respectively, for the simulations with precipitation substitution. Results provide an estimate for the limits of potential predictability if soil moisture variability is to be perfectly predicted. However, this estimate may be model dependent, and needs to be substantiated by other modeling groups.
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25 of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square-differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of m3.1K and s2.4K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has different estimation/simulation variance reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the medium of multiple simulations, the dispersion variances of blocks can be thought to regard technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign to generate more homogenous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under budget constraint. Uncertainty measure of the optimization process in this paper is interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance i.e., minimizing interpolation variance of the block generating maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates to use two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configuration with minimization of interpolation variance and drill hole simulations with maximization of interpolation variance. Two-space interacts to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of approach. The findings showed that the approach could be used to plan a new drilling campaign.
Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data
NASA Astrophysics Data System (ADS)
Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio
2016-02-01
Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respirations. The NEP of different forest ecosystems in Italy was currently simulated by the use of a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes of actual conditions using information regarding the existing woody biomass. The estimates derived from the methodology have been tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology, which was driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). A good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets worsens the accuracy to a varying degree, which is properly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).
Flight dynamics analysis and simulation of heavy lift airships. Volume 2: Technical manual
NASA Technical Reports Server (NTRS)
Ringland, R. F.; Tischler, M. B.; Jex, H. R.; Emmen, R. D.; Ashkenas, I. L.
1982-01-01
The mathematical models embodied in the simulation are described in considerable detail and with supporting evidence for the model forms chosen. In addition the trimming and linearization algorithms used in the simulation are described. Appendices to the manual identify reference material for estimating the needed coefficients for the input data and provide example simulation results.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimate the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed least streamflow error in Alapaha basin and CMLR input showed least error (still very close to LWP) in Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. ?? 2011 Springer-Verlag.
Kelly, Brian P.; Pickett, Linda L.; Hansen, Cristi V.; Ziegler, Andrew C.
2013-01-01
The Equus Beds aquifer is a primary water-supply source for Wichita, Kansas and the surrounding area because of shallow depth to water, large saturated thickness, and generally good water quality. Substantial water-level declines in the Equus Beds aquifer have resulted from pumping groundwater for agricultural and municipal needs, as well as periodic drought conditions. In March 2006, the city of Wichita began construction of the Equus Beds Aquifer Storage and Recovery project to store and later recover groundwater, and to form a hydraulic barrier to the known chloride-brine plume near Burrton, Kansas. In October 2009, the U.S. Geological Survey, in cooperation with the city of Wichita, began a study to determine groundwater flow in the area of the Wichita well field, and chloride transport from the Arkansas River and Burrton oilfield to the Wichita well field. Groundwater flow was simulated for the Equus Beds aquifer using the three-dimensional finite-difference groundwater-flow model MODFLOW-2000. The model simulates steady-state and transient conditions. The groundwater-flow model was calibrated by adjusting model input data and model geometry until model results matched field observations within an acceptable level of accuracy. The root mean square (RMS) error for water-level observations for the steady-state calibration simulation is 9.82 feet. The ratio of the RMS error to the total head loss in the model area is 0.049 and the mean error for water-level observations is 3.86 feet. The difference between flow into the model and flow out of the model across all model boundaries is -0.08 percent of total flow for the steady-state calibration. The RMS error for water-level observations for the transient calibration simulation is 2.48 feet, the ratio of the RMS error to the total head loss in the model area is 0.0124, and the mean error for water-level observations is 0.03 feet. The RMS error calculated for observed and simulated base flow gains or losses for the Arkansas River for the transient simulation is 7,916,564 cubic feet per day (91.6 cubic feet per second) and the RMS error divided by (/) the total range in streamflow (7,916,564/37,461,669 cubic feet per day) is 22 percent. The RMS error calculated for observed and simulated streamflow gains or losses for the Little Arkansas River for the transient simulation is 5,610,089 cubic feet per day(64.9 cubic feet per second) and the RMS error divided by the total range in streamflow (5,612,918/41,791,091 cubic feet per day) is 13 percent. The mean error between observed and simulated base flow gains or losses was 29,999 cubic feet per day (0.34 cubic feet per second) for the Arkansas River and -1,369,250 cubic feet per day (-15.8 cubic feet per second) for the Little Arkansas River. Cumulative streamflow gain and loss observations are similar to the cumulative simulated equivalents. Average percent mass balance difference for individual stress periods ranged from -0.46 to 0.51 percent. The cumulative mass balance for the transient calibration was 0.01 percent. Composite scaled sensitivities indicate the simulations are most sensitive to parameters with a large areal distribution. For the steady-state calibration, these parameters include recharge, hydraulic conductivity, and vertical conductance. For the transient simulation, these parameters include evapotranspiration, recharge, and hydraulic conductivity. The ability of the calibrated model to account for the additional groundwater recharged to the Equus Beds aquifer as part of the Aquifer Storage and Recovery project was assessed by using the U.S. Geological Survey subregional water budget program ZONEBUDGET and comparing those results to metered recharge for 2007 and 2008 and previous estimates of artificial recharge. The change in storage between simulations is the volume of water that estimates the recharge credit for the aquifer storage and recovery system. The estimated increase in storage of 1,607 acre-ft in the basin storage area compared to metered recharge of 1,796 acre-ft indicates some loss of metered recharge. Increased storage outside of the basin storage area of 183 acre-ft accounts for all but 6 acre-ft or 0.33 percent of the total. Previously estimated recharge credits for 2007 and 2008 are 1,018 and 600 acre-ft, respectively, and a total estimated recharge credit of 1,618 acre-ft. Storage changes calculated for this study are 4.42 percent less for 2007 and 5.67 percent more for 2008 than previous estimates. Total storage change for 2007 and 2008 is 0.68 percent less than previous estimates. The small difference between the increase in storage from artificial recharge estimated with the groundwater-flow model and metered recharge indicates the groundwater model correctly accounts for the additional water recharged to the Equus Beds aquifer as part of the Aquifer Storage and Recovery project. Small percent differences between inflows and outflows for all stress periods and all index cells in the basin storage area, improved calibration compared to the previous model, and a reasonable match between simulated and measured long-term base flow indicates the groundwater model accurately simulates groundwater flow in the study area. The change in groundwater level through recent years compared to the August 1940 groundwater level map has been documented and used to assess the change of storage volume of the Equus Beds aquifer in and near the Wichita well field for three different areas. Two methods were used to estimate changes in storage from simulation results using simulated change in groundwater levels in layer 1 between stress periods, and using ZONEBUDGET to calculate the change in storage in the same way the effects of artificial recharge were estimated within the basin storage area. The three methods indicate similar trends although the magnitude of storage changes differ. Information about the change in storage in response to hydrologic stresses is important for managing groundwater resources in the study area. The comparison between the three methods indicates similar storage change trends are estimated and each could be used to determine relative increases or decreases in storage. Use of groundwater level changes that do not include storage changes that occur in confined or semi-confined parts of the aquifer will slightly underestimate storage changes; however, use of specific yield and groundwater level changes to estimate storage change in confined or semi-confined parts of the aquifer will overestimate storage changes. Using only changes in shallow groundwater levels would provide more accurate storage change estimates for the measured groundwater levels method. The value used for specific yield is also an important consideration when estimating storage. For the Equus Beds aquifer the reported specific yield ranges between 0.08 and 0.35 and the storage coefficient (for confined conditions) ranges between 0.0004 and 0.16. Considering the importance of the value of specific yield and storage coefficient to estimates of storage change over time, and the wide range and substantial overlap for the reported values for specific yield and storage coefficient in the study area, further information on the distribution of specific yield and storage coefficient within the Equus Beds aquifer in the study area would greatly enhance the accuracy of estimated storage changes using both simulated groundwater level, simulated groundwater budget, or measured groundwater level methods.
Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management
A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on dual Kalman filter. In this method, firstly by using a brain active localization method (standardized low resolution brain electromagnetic tomography) and applying it to EEG signal, active regions are extracted, and appropriate time model (multivariate autoregressive model) is fitted to extracted brain active sources for evaluating the activity and time dependence between sources. Then, dual Kalman filter is used to estimate model parameters or effective connectivity between active regions. The advantage of this method is the estimation of different brain parts activity simultaneously with the calculation of effective connectivity between active regions. By combining dual Kalman filter with brain source localization methods, in addition to the connectivity estimation between parts, source activity is updated during the time. The proposed method performance has been evaluated firstly by applying it to simulated EEG signals with interacting connectivity simulation between active parts. Noisy simulated signals with different signal to noise ratios are used for evaluating method sensitivity to noise and comparing proposed method performance with other methods. Then the method is applied to real signals and the estimation error during a sweeping window is calculated. By comparing proposed method results in different simulation (simulated and real signals), proposed method gives acceptable results with least mean square error in noisy or real conditions.
Evaluation of infiltration models in contaminated landscape.
Sadegh Zadeh, Kouroush; Shirmohammadi, Adel; Montas, Hubert J; Felton, Gary
2007-06-01
The infiltration models of Kostiakov, Green-Ampt, and Philip (two and three terms equations) were used, calibrated, and evaluated to simulate in-situ infiltration in nine different soil types. The Osborne-Moré modified version of the Levenberg-Marquardt optimization algorithm was coupled with the experimental data obtained by the double ring infiltrometers and the infiltration equations, to estimate the model parameters. Comparison of the model outputs with the experimental data indicates that the models can successfully describe cumulative infiltration in different soil types. However, since Kostiakov's equation fails to accurately simulate the infiltration rate as time approaches infinity, Philip's two-term equation, in some cases, produces negative values for the saturated hydraulic conductivity of soils, and the Green-Ampt model uses piston flow assumptions, we suggest using Philip's three-term equation to simulate infiltration and to estimate the saturated hydraulic conductivity of soils.
Lizarraga, Joy S.; Ockerman, Darwin J.
2010-01-01
The U.S. Geological Survey (USGS), in cooperation with the San Antonio River Authority, the Evergreen Underground Water Conservation District, and the Goliad County Groundwater Conservation District, configured, calibrated, and tested a watershed model for a study area consisting of about 2,150 square miles of the lower San Antonio River watershed in Bexar, Guadalupe, Wilson, Karnes, DeWitt, Goliad, Victoria, and Refugio Counties in south-central Texas. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge using rainfall, potential ET, and upstream discharge data obtained from National Weather Service meteorological stations and USGS streamflow-gaging stations. Additional time-series inputs to the model include wastewater treatment-plant discharges, withdrawals for cropland irrigation, and estimated inflows from springs. Model simulations of streamflow, ET, and groundwater recharge were done for 2000-2007. Because of the complexity of the study area, the lower San Antonio River watershed was divided into four subwatersheds; separate HSPF models were developed for each subwatershed. Simulation of the overall study area involved running simulations of the three upstream models, then running the downstream model. The surficial geology was simplified as nine contiguous water-budget zones to meet model computational limitations and also to define zones for which ET, recharge, and other water-budget information would be output by the model. The model was calibrated and tested using streamflow data from 10 streamflow-gaging stations; additionally, simulated ET was compared with measured ET from a meteorological station west of the study area. The model calibration is considered very good; streamflow volumes were calibrated to within 10 percent of measured streamflow volumes. During 2000-2007, the estimated annual mean rainfall for the water-budget zones ranged from 33.7 to 38.5 inches per year; the estimated annual mean rainfall for the entire watershed was 34.3 inches. Using the HSPF model it was estimated that for 2000-2007, less than 10 percent of the annual mean rainfall on the study watershed exited the watershed as streamflow, whereas about 82 percent, or an average of 28.2 inches per year, exited the watershed as ET. Estimated annual mean groundwater recharge for the entire study area was 3.0 inches, or about 9 percent of annual mean rainfall. Estimated annual mean recharge was largest in water-budget zone 3, the zone where the Carrizo Sand outcrops. In water-budget zone 3, the estimated annual mean recharge was 5.1 inches or about 15 percent of annual mean rainfall. Estimated annual mean recharge was smallest in water-budget zone 6, about 1.1 inches or about 3 percent of annual mean rainfall. The Cibolo Creek subwatershed and the subwatershed of the San Antonio River upstream from Cibolo Creek had the largest and smallest basin yields, about 4.8 inches and 1.2 inches, respectively. Estimated annual ET and annual recharge generally increased with increasing annual rainfall. Also, ET was larger in zones 8 and 9, the most downstream zones in the watershed. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error.
The implementation of sea ice model on a regional high-resolution scale
NASA Astrophysics Data System (ADS)
Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter
2015-09-01
The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters have provided capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with a high resolution, were used for the estimation of sensitivity of model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for year 2010-2011.
Estimation of population size using open capture-recapture models
McDonald, T.L.; Amstrup, Steven C.
2001-01-01
One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
Multimodel ensembles of wheat growth: many models are better than one.
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost
2015-02-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.
Multimodel Ensembles of Wheat Growth: More Models are Better than One
NASA Technical Reports Server (NTRS)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alex C.; Thorburn, Peter J.; Cammarano, Davide;
2015-01-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.
Multimodel Ensembles of Wheat Growth: Many Models are Better than One
NASA Technical Reports Server (NTRS)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alexander C.; Thorburn, Peter J.; Cammarano, Davide;
2015-01-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop model scan give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 2438 for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residues were analyzed: (i) the classic [Normal likelihood - r ≈ N (0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved to be adequate to the proposed objectives, enabling and reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
Deng, Zhimin; Tian, Tianhai
2014-07-29
The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
Chang, Howard H; Fuentes, Montserrat; Frey, H Christopher
2012-09-01
This paper describes a modeling framework for estimating the acute effects of personal exposure to ambient air pollution in a time series design. First, a spatial hierarchical model is used to relate Census tract-level daily ambient concentrations and simulated exposures for a subset of the study period. The complete exposure time series is then imputed for risk estimation. Modeling exposure via a statistical model reduces the computational burden associated with simulating personal exposures considerably. This allows us to consider personal exposures at a finer spatial resolution to improve exposure assessment and for a longer study period. The proposed approach is applied to an analysis of fine particulate matter of <2.5 μm in aerodynamic diameter (PM(2.5)) and daily mortality in the New York City metropolitan area during the period 2001-2005. Personal PM(2.5) exposures were simulated from the Stochastic Human Exposure and Dose Simulation. Accounting for exposure uncertainty, the authors estimated a 2.32% (95% posterior interval: 0.68, 3.94) increase in mortality per a 10 μg/m(3) increase in personal exposure to PM(2.5) from outdoor sources on the previous day. The corresponding estimates per a 10 μg/m(3) increase in PM(2.5) ambient concentration was 1.13% (95% confidence interval: 0.27, 2.00). The risks of mortality associated with PM(2.5) were also higher during the summer months.
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncracy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
Karim, Mohammad Ehsanul; Platt, Robert W
2017-06-15
Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of a SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the specification of the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Niazi, A.; Bentley, L. R.; Hayashi, M.
2016-12-01
Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to condition lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer consisting of relatively high permeable sandstone channels embedded in relatively low permeable mudstone using hydraulic data. The hydraulic data consist of two-hour single well pumping tests extracted from the public water well database for a 250-km2 watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition probability - Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and pumping test data and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. The ITOUGH2 can be used for estimating parameters by automatic modeling calibration, for sensitivity analyses, and for uncertainity propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also bemore » run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal resevoir engineering, and vadose zone hydrology.« less
An open-population hierarchical distance sampling model
Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,
2015-01-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
An open-population hierarchical distance sampling model.
Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott
2015-02-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
Warren E. Heilman; David Y. Hollinger; Xiuping Li; Xindi Bain; Shiyuan. Zhong
2010-01-01
Recently published albedo research has resulted in improved growing-season albedo estimates for forest and grassland vegetation. The impact of these improved estimates on the ability of climate models to simulate growing-season surface temperature patterns is unknown. We have developed a set of current-climate surface temperature scenarios for North America using the...
Modeling the frequency response of microwave radiometers with QUCS
NASA Astrophysics Data System (ADS)
Zonca, A.; Roucaries, B.; Williams, B.; Rubin, I.; D'Arcangelo, O.; Meinhold, P.; Lubin, P.; Franceschet, C.; Jahn, S.; Mennella, A.; Bersanelli, M.
2010-12-01
Characterization of the frequency response of coherent radiometric receivers is a key element in estimating the flux of astrophysical emissions, since the measured signal depends on the convolution of the source spectral emission with the instrument band shape. Laboratory Radio Frequency (RF) measurements of the instrument bandpass often require complex test setups and are subject to a number of systematic effects driven by thermal issues and impedance matching, particularly if cryogenic operation is involved. In this paper we present an approach to modeling radiometers bandpasses by integrating simulations and RF measurements of individual components. This method is based on QUCS (Quasi Universal Circuit Simulator), an open-source circuit simulator, which gives the flexibility of choosing among the available devices, implementing new analytical software models or using measured S-parameters. Therefore an independent estimate of the instrument bandpass is achieved using standard individual component measurements and validated analytical simulations. In order to automate the process of preparing input data, running simulations and exporting results we developed the Python package python-qucs and released it under GNU Public License. We discuss, as working cases, bandpass response modeling of the COFE and Planck Low Frequency Instrument (LFI) radiometers and compare results obtained with QUCS and with a commercial circuit simulator software. The main purpose of bandpass modeling in COFE is to optimize component matching, while in LFI they represent the best estimation of frequency response, since end-to-end measurements were strongly affected by systematic effects.
Wagner, Chad R.
2007-01-01
The use of one-dimensional hydraulic models currently is the standard method for estimating velocity fields through a bridge opening for scour computations and habitat assessment. Flood-flow contraction through bridge openings, however, is hydrodynamically two dimensional and often three dimensional. Although there is awareness of the utility of two-dimensional models to predict the complex hydraulic conditions at bridge structures, little guidance is available to indicate whether a one- or two-dimensional model will accurately estimate the hydraulic conditions at a bridge site. The U.S. Geological Survey, in cooperation with the North Carolina Department of Transportation, initiated a study in 2004 to compare one- and two-dimensional model results with field measurements at complex riverine and tidal bridges in North Carolina to evaluate the ability of each model to represent field conditions. The field data consisted of discharge and depth-averaged velocity profiles measured with an acoustic Doppler current profiler and surveyed water-surface profiles for two high-flow conditions. For the initial study site (U.S. Highway 13 over the Tar River at Greenville, North Carolina), the water-surface elevations and velocity distributions simulated by the one- and two-dimensional models showed appreciable disparity in the highly sinuous reach upstream from the U.S. Highway 13 bridge. Based on the available data from U.S. Geological Survey streamgaging stations and acoustic Doppler current profiler velocity data, the two-dimensional model more accurately simulated the water-surface elevations and the velocity distributions in the study reach, and contracted-flow magnitudes and direction through the bridge opening. To further compare the results of the one- and two-dimensional models, estimated hydraulic parameters (flow depths, velocities, attack angles, blocked flow width) for measured high-flow conditions were used to predict scour depths at the U.S. Highway 13 bridge by using established methods. Comparisons of pier-scour estimates from both models indicated that the scour estimates from the two-dimensional model were as much as twice the depth of the estimates from the one-dimensional model. These results can be attributed to higher approach velocities and the appreciable flow angles at the piers simulated by the two-dimensional model and verified in the field. Computed flood-frequency estimates of the 10-, 50-, 100-, and 500-year return-period floods on the Tar River at Greenville were also simulated with both the one- and two-dimensional models. The simulated water-surface profiles and velocity fields of the various return-period floods were used to compare the modeling approaches and provide information on what return-period discharges would result in road over-topping and(or) pressure flow. This information is essential in the design of new and replacement structures. The ability to accurately simulate water-surface elevations and velocity magnitudes and distributions at bridge crossings is essential in assuring that bridge plans balance public safety with the most cost-effective design. By compiling pertinent bridge-site characteristics and relating them to the results of several model-comparison studies, the framework for developing guidelines for selecting the most appropriate model for a given bridge site can be accomplished.
An analysis of simulated and observed storm characteristics
NASA Astrophysics Data System (ADS)
Benestad, R. E.
2010-09-01
A calculus-based cyclone identification (CCI) method has been applied to the most recent re-analysis (ERAINT) from the European Centre for Medium-range Weather Forecasts and results from regional climate model (RCM) simulations. The storm frequency for events with central pressure below a threshold value of 960-990hPa were examined, and the gradient wind from the simulated storm systems were compared with corresponding estimates from the re-analysis. The analysis also yielded estimates for the spatial extent of the storm systems, which was also included in the regional climate model cyclone evaluation. A comparison is presented between a number of RCMs and the ERAINT re-analysis in terms of their description of the gradient winds, number of cyclones, and spatial extent. Furthermore, a comparison between geostrophic wind estimated though triangules of interpolated or station measurements of SLP is presented. Wind still represents one of the more challenging variables to model realistically.
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom and used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager((R)) was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. In the results, the MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to re-construct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through the Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude of the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
Modelling soil erosion in a Mediterranean watershed: Comparison between SWAT and AnnAGNPS models.
Abdelwahab, O M M; Ricci, G F; De Girolamo, A M; Gentile, F
2018-06-20
In this study, the simulations generated by two of the most widely used hydrological basin-scale models, the Annualized Agricultural Non-Point Source (AnnAGNPS) and the Soil and Water Assessment Tool (SWAT), were compared in a Mediterranean watershed, the Carapelle (Apulia, Southern Italy). Input data requirements, time and efforts needed for input preparation, strength and weakness points of each model, ease of use and limitations were evaluated in order to give information to users. Models were calibrated and validated at monthly time scale for hydrology and sediment load using a four year period of observations (streamflow and suspended sediment concentrations). In the driest year, the specific sediment load measured at the outlet was 0.89 t ha -1 yr -1 , while the simulated values were 0.83 t ha -1 yr -1 and 1.99 t ha -1 yr -1 for SWAT and AnnAGNPS, respectively. In the wettest year, the specific measured sediment load was 7.45 t ha -1 yr -1 , and the simulated values were 8.27 t ha -1 yr -1 and 6.23 t ha -1 yr -1 for SWAT and AnnAGNPS, respectively. Both models showed from fair to a very good correlation between observed and simulated streamflow and satisfactory for sediment load. Results showed that most of the basin is under moderate (1.4-10 t ha -1 yr -1 ) and high-risk erosion (> 10 t ha -1 yr -1 ). The sediment yield predicted by the SWAT and AnnAGNPS models were compared with estimates of soil erosion simulated by models for Europe (PESERA and RUSLE2015). The average gross erosion estimated by the RUSLE2015 model (12.5 t ha -1 yr -1 ) resulted comparable with the average specific sediment yield estimated by SWAT (8.8 t ha -1 yr -1 ) and AnnAGNPS (5.6 t ha -1 yr -1 ), while it was found that the average soil erosion estimated by PESERA is lower than the other estimates (1.2 t ha -1 yr -1 ). Copyright © 2018 Elsevier Inc. All rights reserved.
Host Model Uncertainty in Aerosol Radiative Forcing Estimates - The AeroCom Prescribed Experiment
NASA Astrophysics Data System (ADS)
Stier, P.; Kinne, S.; Bellouin, N.; Myhre, G.; Takemura, T.; Yu, H.; Randles, C.; Chung, C. E.
2012-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. However, even for the case of identical aerosol emissions, the simulated direct aerosol radiative forcings show significant diversity among the AeroCom models (Schulz et al., 2006). Our analysis of aerosol absorption in the AeroCom models indicates a larger diversity in the translation from given aerosol radiative properties (absorption optical depth) to actual atmospheric absorption than in the translation of a given atmospheric burden of black carbon to the radiative properties (absorption optical depth). The large diversity is caused by differences in the simulated cloud fields, radiative transfer, the relative vertical distribution of aerosols and clouds, and the effective surface albedo. This indicates that differences in host model (GCM or CTM hosting the aerosol module) parameterizations contribute significantly to the simulated diversity of aerosol radiative forcing. The magnitude of these host model effects in global aerosol model and satellites retrieved aerosol radiative forcing estimates cannot be estimated from the diagnostics of the "standard" AeroCom forcing experiments. To quantify the contribution of differences in the host models to the simulated aerosol radiative forcing and absorption we conduct the AeroCom Prescribed experiment, a simple aerosol model and satellite retrieval intercomparison with prescribed highly idealised aerosol fields. Quality checks, such as diagnostic output of the 3D aerosol fields as implemented in each model, ensure the comparability of the aerosol implementation in the participating models. The simulated forcing variability among the models and retrievals is a direct measure of the contribution of host model assumptions to the uncertainty in the assessment of the aerosol radiative effects. We will present the results from the AeroCom prescribed experiment with focus on the attribution to the simulated variability to parametric and structural model uncertainties. This work will help to prioritise areas for future model improvements and ultimately lead to uncertainty reduction.
Hydrologic model of the Modesto Region, California, 1960-2004
Phillips, Steven P.; Rewis, Diane L.; Traum, Jonathan A.
2015-01-01
The simulated exchange between groundwater and surface water was a small percentage of streamflow, typically ranging within a loss or gain of about 2 cubic feet per second per mile. The simulated exchange compared reasonably with limited independent estimates available, but substantial uncertainty is associated with these estimates.
USDA-ARS?s Scientific Manuscript database
Cotton (Gossypium hirsutum L.) yield losses by southern root-knot nematode [Meloidogyne incognita (Kofoid & White) Chitwood] (RKN) are usually estimated after significant damage has been caused. However, estimation of potential yield reduction before planting is possible by using crop simulation mod...
We estimated surface salinity flux and solar penetration from satellite data, and performed model simulations to examine the impact of including the satellite estimates on temperature, salinity, and dissolved oxygen distributions on the Louisiana continental shelf (LCS) near the ...
Model methodology for estimating pesticide concentration extremes based on sparse monitoring data
Vecchia, Aldo V.
2018-03-22
This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report; and, R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RPI, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was with the increased SE.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for APR1400 as an advanced design of PWR has been performed using the RELAP5 code. The simulation was conducted in a model of thermal-hydraulic test facility called as ATLAS, which represents a scaled down facility of the APR1400 design. The main steam line break event is described in a open-access safety report document, in which initial conditions and assumptionsfor the analysis were utilized in performing the simulation and analysis of the selected parameter. The objective of this work was to conduct a benchmark activities by comparing the simulation results of the CESEC-III code as a conservative approach code with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However the degree of accuracy still needs further research an analysis by comparing with the other best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.
Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K
2017-02-01
There is a growing debate with regards to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.
NASA Astrophysics Data System (ADS)
Chatani, Satoru; Matsunaga, Sou N.; Nakatsuka, Seiji
2015-11-01
A new gridded database has been developed to estimate the amount of isoprene, monoterpene, and sesquiterpene emitted from all the broadleaf and coniferous trees in Japan with the Model of Emissions of Gases and Aerosols from Nature (MEGAN). This database reflects the vegetation specific to Japan more accurately than existing ones. It estimates much lower isoprene emitted from other vegetation than trees, and higher sesquiterpene emissions mainly emitted from Cryptomeria japonica, which is the most abundant plant type in Japan. Changes in biogenic emissions result in the decrease in ambient ozone and increase in organic aerosol simulated by the air quality simulation over the Tokyo Metropolitan Area in Japan. Although newly estimated biogenic emissions contribute to a better model performance on overestimated ozone and underestimated organic aerosol, they are not a single solution to solve problems associated with the air quality simulation.
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Modeling and simulating industrial land-use evolution in Shanghai, China
NASA Astrophysics Data System (ADS)
Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl
2018-01-01
This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, the Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely resembles actual land transformation patterns and the model can not only simulate land development, but also redevelopment processes in metropolitan areas.
Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo
2008-01-01
In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
A kinetic energy model of two-vehicle crash injury severity.
Sobhani, Amir; Young, William; Logan, David; Bahrololoom, Sareh
2011-05-01
An important part of any model of vehicle crashes is the development of a procedure to estimate crash injury severity. After reviewing existing models of crash severity, this paper outlines the development of a modelling approach aimed at measuring the injury severity of people in two-vehicle road crashes. This model can be incorporated into a discrete event traffic simulation model, using simulation model outputs as its input. The model can then serve as an integral part of a simulation model estimating the crash potential of components of the traffic system. The model is developed using Newtonian Mechanics and Generalised Linear Regression. The factors contributing to the speed change (ΔV(s)) of a subject vehicle are identified using the law of conservation of momentum. A Log-Gamma regression model is fitted to measure speed change (ΔV(s)) of the subject vehicle based on the identified crash characteristics. The kinetic energy applied to the subject vehicle is calculated by the model, which in turn uses a Log-Gamma Regression Model to estimate the Injury Severity Score of the crash from the calculated kinetic energy, crash impact type, presence of airbag and/or seat belt and occupant age. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Comyn-Platt, Edward; Clark, Douglas; Blyth, Eleanor
2016-04-01
The UK is required to provide accurate estimates of the UK greenhouse gas (GHG; CO2, CH4 and N2O) emissions for the UNFCCC (United Nations Framework Convention on Climate Change). Process based land surface models (LSMs), such as the Joint UK Land Environment Simulator (JULES), attempt to provide such estimates based on environmental (e.g. land use and soil type) and meteorological conditions. The standard release of JULES focusses on the water and carbon cycles, however, it has long been suggested that a coupled carbon-nitrogen scheme could enhance simulations. This is of particular importance when estimating agricultural emission inventories where the carbon cycle is effectively managed via the human application of nitrogen based fertilizers. JULES-ECOSSE-FUN (JEF) links JULES with the Estimation of Carbon in Organic Soils - Sequestration and Emission (ECOSSE) model and the Fixation and Uptake of Nitrogen (FUN) model as a means of simulating C:N coupling. This work presents simulations from the standard release of JULES and the most recent incarnation of the JEF coupled system at the point and field scale. Various configurations of JULES and JEF were calibrated and fine-tuned based on comparisons with observations from three UK field campaigns (Crichton, Harwood Forest and Brattleby) specifically chosen to represent the managed vegetation types that cover the UK. The campaigns included flux tower and chamber measurements of CO2, CH4 and N2O amongst other meteorological parameters and records of land management such as application of fertilizer and harvest date at the agricultural sites. Based on the results of these comparisons, JULES and/or JEF will be used to provide simulations on the regional and national scales in order to provide improved estimates of the total UK emission inventory.
Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.
Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) < 18% for both calibration and validation. The RCM could not simulate sediment loss (NSE < 0, |PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Nakanishi, Allen S.; Lilly, Michael R.
1998-01-01
MODFLOW, a finite-difference model of ground-water flow, was used to simulate the flow of water between the aquifer and the Chena River at Fort Wainwright, Alaska. The model was calibrated by comparing simulated ground-water hydrographs to those recorded in wells during periods of fluctuating river levels. The best fit between simulated and observed hydrographs occurred for the following: 20 feet per day for vertical hydraulic conductivity, 400 feet per day for horizontal hydraulic conductivity, 1:20 for anisotropy (vertical to horizontal hydraulic conductivity), and 350 per feet for riverbed conductance. These values include a 30 percent adjustment for geometry effects. The estimated values for hydraulic conductivities of the alluvium are based on assumed values of 0.25 for specific yield and 0.000001 per foot for specific storage of the alluvium; the values assumed for bedrock are 0.1 foot per day horizontal hydraulic conductivity, 0.005 foot per day vertical hydraulic conductivity, and 0.0000001 per foot for specific storage. The resulting diffusivity for the alluvial aquifer is 1,600 feet per day. The estimated values of these hydraulic properties are nearly proportional to the assumed value of specific yield. These values were not found to be sensitive to the assumed values for bedrock. The hydrologic parameters estimated using the cross-sectional model are only valid when taken in context with the other values (both estimated and assumed) used in this study. The model simulates horizontal and vertical flow directions near the river during periods of varying river stage. This information is useful for interpreting bank-storage effects, including the flow of contaminants in the aquifer near the river.
Satellite rainfall retrieval by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.
1986-01-01
The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors.The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistical model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the thresholds, a rainrate histogram can be obtained, from which the mean and the variant can be estimated. A logistical model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement which is deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistical model, simulated rain fields generated by rainfield models with prescribed parameters are needed. A stringent test of the logistical model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration whose solutions are lognormally distributed in some asymptotic limits has also been developed.
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning’s “n”) were estimated in areas of deep and shallow flows as 0.0234, and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet was related to the two-dimension model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature. Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
2011-01-01
Background The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of autoregressive one [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia. macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days as based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter estimation based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
Tradeoffs among watershed model calibration targets for parameter estimation
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Ge, Zhenpeng; Wang, Yi
2017-04-20
Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al., to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σ eff ), and then use the latter to estimate σ via molecular dynamics simulations. Through scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (> ∼100 nm) NPs. Overall, our results suggest that despite certain limitation, the above protocol can be readily employed to guide the model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, D.J.
1993-10-01
This paper discusses Carol Gotway`s paper, ``The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment.`` The paper centers on the use of conditional simulation and the use of geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, turning bands method for random field generation, and estimation of the comulative distribution function are brought out.
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
A simulation model for probabilistic analysis of Space Shuttle abort modes
NASA Technical Reports Server (NTRS)
Hage, R. T.
1993-01-01
A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only, they are not official NASA probability estimates.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Marvis, Dimitri N.
2014-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Simulation analysis of photometric data for attitude estimation of unresolved space objects
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang
2017-10-01
The attitude information acquisition of unresolved space objects, such as micro-nano satellites and GEO objects under the way of ground-based optical observations, is a challenge to space surveillance. In this paper, a useful method is proposed to estimate the SO attitude state according to the simulation analysis of photometric data in different attitude states. The object shape model was established and the parameters of the BRDF model were determined, then the space object photometric model was established. Furthermore, the photometric data of space objects in different states are analyzed by simulation and the regular characteristics of the photometric curves are summarized. The simulation results show that the photometric characteristics are useful for attitude inversion in a unique way. Thus, a new idea is provided for space object identification in this paper.
Using the power balance model to simulate cross-country skiing on varying terrain.
Moxnes, John F; Sandbakk, Oyvind; Hausken, Kjell
2014-01-01
The current study adapts the power balance model to simulate cross-country skiing on varying terrain. We assumed that the skier's locomotive power at a self-chosen pace is a function of speed, which is impacted by friction, incline, air drag, and mass. An elite male skier's position along the track during ski skating was simulated and compared with his experimental data. As input values in the model, air drag and friction were estimated from the literature based on the skier's mass, snow conditions, and speed. We regard the fit as good, since the difference in racing time between simulations and measurements was 2 seconds of the 815 seconds racing time, with acceptable fit both in uphill and downhill terrain. Using this model, we estimated the influence of changes in various factors such as air drag, friction, and body mass on performance. In conclusion, the power balance model with locomotive power as a function of speed was found to be a valid tool for analyzing performance in cross-country skiing.
SIMYAR: a cable-yarding simulation model.
R.J. McGaughey; R.H. Twito
1987-01-01
A skyline-logging simulation model designed to help planners evaluate potential yarding options and alternative harvest plans is presented. The model, called SIMYAR, uses information about the timber stand, yarding equipment, and unit geometry to estimate yarding co stand productivity for a particular operation. The costs of felling, bucking, loading, and hauling are...
Evaluation of methodology for detecting/predicting migration of forest species
Dale S. Solomon; William B. Leak
1996-01-01
Available methods for analyzing migration of forest species are evaluated, including simulation models, remeasured plots, resurveys, pollen/vegetation analysis, and age/distance trends. Simulation models have provided some of the most drastic estimates of species changes due to predicted changes in global climate. However, these models require additional testing...
The unsaturated or vadose zone provides a complex system for the simulation of water movement and contaminant transport and fate. Numerous models are available for performing simulations related to the movement of water. There exists extensive documentation of these models. Ho...
USDA-ARS?s Scientific Manuscript database
DayCent is a biogeochemical model of intermediate complexity used to simulate carbon, nutrient, and greenhouse gas fluxes for crop, grassland, forest, and savanna ecosystems. Model inputs include: soil texture and hydraulic properties, current and historical land use, vegetation cover, daily maximum...
NASA Astrophysics Data System (ADS)
Zhang, Q.; Yao, T.
2016-12-01
The climate is affected by the land surface through regulating the exchange of mass and energy with the atmosphere. The energy that reaches the land surface has three pathways: (1) reflected into atmosphere; (2) absorbed for photosynthesis; and (3) discarded as latent and sensible heat or emitted as fluorescence. Vegetation removes CO2 from the atmosphere during the process of photosynthesis, but also releases CO2 back into the atmosphere through the process of respiration. The complex set of vegetation-soil-atmosphere interactions requires that a realistic land-surface parameterization be included in any climate model or general circulation model (GCM) to accurately simulate canopy photosynthesis and stomatal conductance.We retrieve fraction of PAR absorbed by chlorophyll (fAPARchl) with an advanced canopy-leaf-soil-snow-water coupled radiative transfer model. Most ecological models and land-surface models that simulate vegetation GPP with remote sensing data utilize fraction of PAR absorbed by the whole canopy (fAPARcanopy). However, only the PAR absorbed by chlorophyll is potentially available for photosynthesis since the PAR absorbed by non-photosynthetic vegetation section (NPV) of the canopy is not used for photosynthesis. Therefore, fAPARchl (rather than fAPARcanopy) should be utilized to estimate fAPAR for photosynthesis (fAPARPSN), and thus in GPP simulation. Globally selected sites include those sites in tropical, Arctic/boreal, coastal, and wetland-dominant regions. The fAPARchl and fAPARcanopy products for a surrounding area 50 km x 50 km of each site are mapped. The fAPARchl is utilized to estimate GPP, and compared to tower flux GPP for validation. The GPP estimation performance with fAPARchl is also compared with the GPP estimation performance with MOD15A2 FPAR. The fAPARchl product is further implemented into ecological models and land-surface models to simulate vegetation GPP. NDVI is the other proxy of fAPARPSN in GPP estimation. We quantify the uncertainties in estimates of fAPARPSN when approximated with fAPARcanopy and NDVI. The uncertainties are significant and vary spatially, temporally, and with plant functional types.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.
2014-12-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.
Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko
2012-01-01
This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as functions of relative humidity and temperature, whereas diffusion of liquid water was empirically obtained from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with calculation span of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only a top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.
Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks
NASA Astrophysics Data System (ADS)
Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.
2017-07-01
The problem of channel quality estimation for devices in a wireless 5G network is formulated. As a performance metrics of interest we choose the signal-to-interference-plus-noise ratio, which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
Modeling the October 2005 lahars at Panabaj (Guatemala)
NASA Astrophysics Data System (ADS)
Charbonnier, S. J.; Connor, C. B.; Connor, L. J.; Sheridan, M. F.; Oliva Hernández, J. P.; Richardson, J. A.
2018-01-01
An extreme rainfall event in October of 2005 triggered two deadly lahars on the flanks of Tolimán volcano (Guatemala) that caused many fatalities in the village of Panabaj. We mapped the deposits of these lahars, then developed computer simulations of the lahars using the geologic data and compared simulated area inundated by the flows to mapped area inundated. Computer simulation of the two lahars was dramatically improved after calibration with geological data. Specifically, detailed field measurements of flow inundation area, flow thickness, flow direction, and velocity estimates, collected after lahar emplacement, were used to calibrate the rheological input parameters for the models, including deposit volume, yield strength, sediment and water concentrations, and Manning roughness coefficients. Simulations of the two lahars, with volumes of 240,200 ± 55,400 and 126,000 ± 29,000 m3, using the FLO-2D computer program produced models of lahar runout within 3% of measured runouts and produced reasonable estimates of flow thickness and velocity along the lengths of the simulated flows. We compare areas inundated using the Jaccard fit, model sensitivity, and model precision metrics, all related to Bayes' theorem. These metrics show that false negatives (areas inundated by the observed lahar where not simulated) and false positives (areas not inundated by the observed lahar where inundation was simulated) are reduced using a model calibrated by rheology. The metrics offer a procedure for tuning model performance that will enhance model accuracy and make numerical models a more robust tool for natural hazard reduction.
Accurate estimates for North American background (NAB) ozone (O3) in surface air over the United States are needed for setting and implementing an attainable national O3 standard. These estimates rely on simulations with atmospheric chemistry-transport models that set North Amer...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Kriston P.; Sprik, Samuel J.; Tamburello, David A.
The U.S. Department of Energy (DOE) has developed a vehicle framework model to simulate fuel cell-based light-duty vehicle operation for various hydrogen storage systems. This transient model simulates the performance of the storage system, fuel cell, and vehicle for comparison to DOE’s Technical Targets using four drive cycles/profiles. Chemical hydrogen storage models have been developed for the Framework model for both exothermic and endothermic materials. Despite the utility of such models, they require that material researchers input system design specifications that cannot be easily estimated. To address this challenge, a design tool has been developed that allows researchers to directlymore » enter kinetic and thermodynamic chemical hydrogen storage material properties into a simple sizing module that then estimates the systems parameters required to run the storage system model. Additionally, this design tool can be used as a standalone executable file to estimate the storage system mass and volume outside of the framework model and compare it to the DOE Technical Targets. These models will be explained and exercised with existing hydrogen storage materials.« less
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method of modelling the processes of aging lithium-ion batteries, its implementation as a computer application and results for battery state estimation. Authors use previously developed behavioural battery model, which was built using battery operating characteristics obtained from the experiment. This model was implemented in the form of a computer program using a database to store battery characteristics. Batteries aging process is a new extended functionality of the model. Algorithm of computer simulation uses a real measurements of battery capacity as a function of the battery charge and discharge cycles number. Simulation allows to take into account the incomplete cycles of charge or discharge battery, which are characteristic for transport powered by electricity. The developed model was used to simulate the battery state estimation for different load profiles, obtained by measuring the movement of the selected means of transport.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: Monte Carlo simulation's is mainly computational time; and first-order analysis are mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
NASA Astrophysics Data System (ADS)
Esrael, D.; Kacem, M.; Benadda, B.
2017-07-01
We investigate how the simulation of the venting/soil vapour extraction (SVE) process is affected by the mass transfer coefficient, using a model comprising five partial differential equations describing gas flow and mass conservation of phases and including an expression accounting for soil saturation conditions. In doing so, we test five previously reported quations for estimating the non-aqueous phase liquid (NAPL)/gas initial mass transfer coefficient and evaluate an expression that uses a reference NAPL saturation. Four venting/SVE experiments utilizing a sand column are performed with dry and non-saturated sand at low and high flow rates, and the obtained experimental results are subsequently simulated, revealing that hydrodynamic dispersion cannot be neglected in the estimation of the mass transfer coefficient, particularly in the case of low velocities. Among the tested models, only the analytical solution of a convection-dispersion equation and the equation proposed herein are suitable for correctly modelling the experimental results, with the developed model representing the best choice for correctly simulating the experimental results and the tailing part of the extracted gas concentration curve.
Freight Transportation Energy Use : Volume 1. Summary and Baseline Results.
DOT National Transportation Integrated Search
1978-07-01
The overall design of the TSC Freight Energy Model is presented. A hierarchical modeling strategy is used, in which detailed modal simulators estimate the performance characteristics of transportation network elements, and the estimates are input to ...
NASA Astrophysics Data System (ADS)
Zhang, J.; Fang, N. Z.
2017-12-01
A potential flood forecast system is under development for the Upper Trinity River Basin (UTRB) in North Central of Texas using the WRF-Hydro model. The Routing Application for the Parallel Computation of Discharge (RAPID) is utilized as channel routing module to simulate streamflow. Model performance analysis was conducted based on three quantitative precipitation estimates (QPE): the North Land Data Assimilation System (NLDAS) rainfall, the Multi-Radar Multi-Sensor (MRMS) QPE and the National Centers for Environmental Prediction (NCEP) quality-controlled stage IV estimates. Prior to hydrologic simulation, QPE performance is assessed on two time scales (daily and hourly) using the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) and Hydrometeorological Automated Data System (HADS) hourly products. The calibrated WRF-Hydro model was then evaluated by comparing the simulated against the USGS observed using various QPE products. The results imply that the NCEP stage IV estimates have the best accuracy among the three QPEs on both time scales, while the NLDAS rainfall performs poorly because of its coarse spatial resolution. Furthermore, precipitation bias demonstrates pronounced impact on flood forecasting skills, as the root mean squared errors are significantly reduced by replacing NLDAS rainfall with NCEP stage IV estimates. This study also demonstrates that accurate simulated results can be achieved when initial soil moisture values are well understood in the WRF-Hydro model. Future research effort will therefore be invested on incorporating data assimilation with focus on initial states of the soil properties for UTRB.
Deletion Diagnostics for Alternating Logistic Regressions
Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.
2013-01-01
Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulations studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. Thismore » method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.« less
Jeton, Anne E.; Maurer, Douglas K.
2011-01-01
The effect that land use may have on streamflow in the Carson River, and ultimately its impact on downstream users can be evaluated by simulating precipitation-runoff processes and estimating groundwater inflow in the middle Carson River in west-central Nevada. To address these concerns, the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, began a study in 2008 to evaluate groundwater flow in the Carson River basin extending from Eagle Valley to Churchill Valley, called the middle Carson River basin in this report. This report documents the development and calibration of 12 watershed models and presents model results and the estimated mean annual water budgets for the modeled watersheds. This part of the larger middle Carson River study will provide estimates of runoff tributary to the Carson River and the potential for groundwater inflow (defined here as that component of recharge derived from percolation of excess water from the soil zone to the groundwater reservoir). The model used for the study was the U.S. Geological Survey's Precipitation-Runoff Modeling System, a physically based, distributed-parameter model designed to simulate precipitation and snowmelt runoff as well as snowpack accumulation and snowmelt processes. Models were developed for 2 perennial watersheds in Eagle Valley having gaged daily mean runoff, Ash Canyon Creek and Clear Creek, and for 10 ephemeral watersheds in the Dayton Valley and Churchill Valley hydrologic areas. Model calibration was constrained by daily mean runoff for the 2 perennial watersheds and for the 10 ephemeral watersheds by limited indirect runoff estimates and by mean annual runoff estimates derived from empirical methods. The models were further constrained by limited climate data adjusted for altitude differences using annual precipitation volumes estimated in a previous study. The calibration periods were water years 1980-2007 for Ash Canyon Creek, and water years 1991-2007 for Clear Creek. To allow for water budget comparisons to the ephemeral models, the two perennial models were then run from 1980 to 2007, the time period constrained somewhat by the later record for the high-altitude climate station used in the simulation. The daily mean values of precipitation, runoff, evapotranspiration, and groundwater inflow simulated from the watershed models were summed to provide mean annual rates and volumes derived from each year of the simulation. Mean annual bias for the calibration period for Ash Canyon Creek and Clear Creek watersheds was within 6 and 3 percent, and relative errors were about 18 and -2 percent, respectively. For the 1980-2007 period of record, mean recharge efficiency and runoff efficiency (percentage of precipitation as groundwater inflow and runoff) averaged 7 and 39 percent, respectively, for Ash Canyon Creek, and 8 and 31 percent, respectively, for Clear Creek. For this same period, groundwater inflow volumes averaged about 500 acre-feet for Ash Canyon and 1,200 acre-feet for Clear Creek. The simulation period for the ephemeral watersheds ranged from water years 1978 to 2007. Mean annual simulated precipitation ranged from 6 to 11 inches. Estimates of recharge efficiency for the ephemeral watersheds ranged from 3 percent for Eureka Canyon to 7 percent for Eldorado Canyon. Runoff efficiency ranged from 7 percent for Eureka Canyon and 15 percent at Brunswick Canyon. For the 1978-2007 period, mean annual groundwater inflow volumes ranged from about 40 acre-feet for Eureka Canyon to just under 5,000 acre-feet for Churchill Canyon watershed. Watershed model results indicate significant interannual variability in the volumes of groundwater inflow caused by climate variations. For most of the modeled watersheds, little to no groundwater inflow was simulated for years with less than 8 inches of precipitation, unless those years were preceded by abnormally high precipitation years with significant subsurface storage carryover.
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
Chai, Chen; Wong, Yiik Diew; Wang, Xuesong
2017-07-01
This paper proposes a simulation-based approach to estimate safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which involves linguistic terms and uncertainty, is incorporated with Cellular Automata model to simulate decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships between cognitive failures and driving errors with safety performance. Simulation results show Different types of cognitive failures are found to have varied relationship with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study serves to provide a novel approach to linguistically assess cognitions and replicate decision-making procedures of the individual driver. Compare to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequencymore » parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.« less
J-adaptive estimation with estimated noise statistics
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1973-01-01
The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.
2013-09-01
which utilizes FTA and then loads it into a DES engine to generate simulation results. .......44 Figure 21. This simulation architecture is...While Discrete Event Simulation ( DES ) can provide accurate time estimation and fast simulation speed, models utilizing it often suffer...C4ISR progress in MDW is developed in this research to demonstrate the feasibility of AEMF- DES and explore its potential. The simulation (MDSIM
Testing the accuracy of a 1-D volcanic plume model in estimating mass eruption rate
Mastin, Larry G.
2014-01-01
During volcanic eruptions, empirical relationships are used to estimate mass eruption rate from plume height. Although simple, such relationships can be inaccurate and can underestimate rates in windy conditions. One-dimensional plume models can incorporate atmospheric conditions and give potentially more accurate estimates. Here I present a 1-D model for plumes in crosswind and simulate 25 historical eruptions where plume height Hobs was well observed and mass eruption rate Mobs could be calculated from mapped deposit mass and observed duration. The simulations considered wind, temperature, and phase changes of water. Atmospheric conditions were obtained from the National Center for Atmospheric Research Reanalysis 2.5° model. Simulations calculate the minimum, maximum, and average values (Mmin, Mmax, and Mavg) that fit the plume height. Eruption rates were also estimated from the empirical formula Mempir = 140Hobs4.14 (Mempir is in kilogram per second, Hobs is in kilometer). For these eruptions, the standard error of the residual in log space is about 0.53 for Mavg and 0.50 for Mempir. Thus, for this data set, the model is slightly less accurate at predicting Mobs than the empirical curve. The inability of this model to improve eruption rate estimates may lie in the limited accuracy of even well-observed plume heights, inaccurate model formulation, or the fact that most eruptions examined were not highly influenced by wind. For the low, wind-blown plume of 14–18 April 2010 at Eyjafjallajökull, where an accurate plume height time series is available, modeled rates do agree better with Mobs than Mempir.
Will molecular dynamics simulations of proteins ever reach equilibrium?
Genheden, Samuel; Ryde, Ulf
2012-06-28
We show that conformational entropies calculated for five proteins and protein-ligand complexes with dihedral-distribution histogramming, the von Mises approach, or quasi-harmonic analysis do not converge to any useful precision even if molecular dynamics (MD) simulations of 380-500 ns length are employed (the uncertainty is 12-89 kJ mol(-1)). To explain this, we suggest a simple protein model involving dihedrals with effective barriers forming a uniform distribution and show that for such a model, the entropy increases logarithmically with time until all significantly populated dihedral states have been sampled, in agreement with the simulations (during the simulations, 52-70% of the available dihedral phase space has been visited). This is also confirmed by the analysis of the trajectories of a 1 ms simulation of bovine pancreatic trypsin inhibitor (31 kJ mol(-1) difference in the entropy between the first and second part of the simulation). Strictly speaking, this means that it is practically impossible to equilibrate MD simulations of proteins. We discuss the implications of such a lack of strict equilibration of protein MD simulations and show that ligand-binding free energies estimated with the MM/GBSA method (molecular mechanics with generalised Born and surface-area solvation) vary by 3-15 kJ mol(-1) during a 500 ns simulation (the higher estimate is caused by rare conformational changes), although they involve a questionable but well-converged normal-mode entropy estimate, whereas free energies estimated by free-energy perturbation vary by less than 0.6 kJ mol(-1) for the same simulation.
Estimating Uncertainty in N2O Emissions from US Cropland Soils
USDA-ARS?s Scientific Manuscript database
A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Hsu, K. L.
2017-12-01
A new satellite-based precipitation dataset, Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) with long-term time series dating back to 1983 can be one valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance on precipitation projection and streamflow simulation, particularly on extreme precipitation and streamflow events. The results show PERSIANN-CDR is a valuable dataset for climate studies, even on extreme precipitation events. The precipitation estimates and their extreme events from CMIP5 models are improved significantly compared with rain gauge observations after bias-correction by the PERSIANN-CDR precipitation estimates. Given streamflows simulated with raw and bias-corrected precipitation estimates from 16 CMIP5 models, 10 out of 16 are improved after bias-correction. The impact of bias-correction on extreme events for streamflow simulations are unstable, with eight out of 16 models can be clearly claimed they are improved after the bias-correction. Concerning the performance of raw CMIP5 models on precipitation, IPSL-CM5A-MR excels the other CMIP5 models, while MRI-CGCM3 outperforms on extreme events with its better performance on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5, and MRI-CGCM3 outperform other models on streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM and IPSL-CM5A-MR behaves better than the other models after bias-correction.
An observationally constrained estimate of global dust aerosol optical depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, David A.; Heald, Colette L.; Kok, Jasper F.
Here, the role of mineral dust in climate and ecosystems has been largely quantified using global climate and chemistry model simulations of dust emission, transport, and deposition. However, differences between these model simulations are substantial, with estimates of global dust aerosol optical depth (AOD) that vary by over a factor of 5. Here we develop an observationally based estimate of the global dust AOD, using multiple satellite platforms, in situ AOD observations and four state-of-the-science global models over 2004–2008. We estimate that the global dust AOD at 550 nm is 0.030 ± 0.005 (1σ), higher than the AeroCom model medianmore » (0.023) and substantially narrowing the uncertainty. The methodology used provides regional, seasonal dust AOD and the associated statistical uncertainty for key dust regions around the globe with which model dust schemes can be evaluated. Exploring the regional and seasonal differences in dust AOD between our observationally based estimate and the four models in this study, we find that emissions in Africa are often overrepresented at the expense of Asian and Middle Eastern emissions and that dust removal appears to be too rapid in most models.« less
An observationally constrained estimate of global dust aerosol optical depth
Ridley, David A.; Heald, Colette L.; Kok, Jasper F.; ...
2016-12-06
Here, the role of mineral dust in climate and ecosystems has been largely quantified using global climate and chemistry model simulations of dust emission, transport, and deposition. However, differences between these model simulations are substantial, with estimates of global dust aerosol optical depth (AOD) that vary by over a factor of 5. Here we develop an observationally based estimate of the global dust AOD, using multiple satellite platforms, in situ AOD observations and four state-of-the-science global models over 2004–2008. We estimate that the global dust AOD at 550 nm is 0.030 ± 0.005 (1σ), higher than the AeroCom model medianmore » (0.023) and substantially narrowing the uncertainty. The methodology used provides regional, seasonal dust AOD and the associated statistical uncertainty for key dust regions around the globe with which model dust schemes can be evaluated. Exploring the regional and seasonal differences in dust AOD between our observationally based estimate and the four models in this study, we find that emissions in Africa are often overrepresented at the expense of Asian and Middle Eastern emissions and that dust removal appears to be too rapid in most models.« less
Assessment of ecologic regression in the study of lung cancer and indoor radon.
Stidley, C A; Samet, J M
1994-02-01
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H
2012-10-01
Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. Also the cut-off values for weighted residuals were tested in the simulation study. Errors in registration have limited the influence on outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors. Large registration errors can be detected by weighted residuals of concentration.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online estimation method of cutting error by analyzing of internal sensor readings. The internal sensors of numerical control (NC) machine tool are selected to avoid installation problem. The estimation mathematic model of cutting error was proposed to compute the relative position of cutting point and tool center point (TCP) from internal sensor readings based on cutting theory of gear. In order to verify the effectiveness of the proposed model, it was simulated and experimented in gear generating grinding process. The cutting error of gear was estimated and the factors which induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of work-piece during machining process.
Vigan, Marie; Stirnemann, Jérôme; Mentré, France
2014-05-01
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess estimation performance of Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX, Adaptive Gaussian Quadrature (AGQ), and Laplace algorithm in PROC NLMIXED of SAS and (2) to evaluate properties of test of a dichotomous covariate on occurrence of events. The simulation setting is inspired from an analysis of occurrence of bone events after the initiation of treatment by imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: no, low, or high. Several values of baseline hazard model, variability, number of subject, and effect of covariate were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied properties of Wald and likelihood ratio test (LRT). We used these methods to analyze occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ algorithms provided good estimates of parameters much better than SAEM with one chain and Laplace which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were closed to 5%, and power varied as expected for SAEM with three chains and AGQ. Probability of having at least one event under treatment was 19.1%.
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component for these research endeavors so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameters estimates, and standard errors are compared and discussed
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly
2011-06-01
Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective based technique to estimate parameter or models ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that gave both the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
Jacobs, Matthieu; Grégoire, Nicolas; Couet, William; Bulitta, Jurgen B.
2016-01-01
Semi-mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling is increasingly used for antimicrobial drug development and optimization of dosage regimens, but systematic simulation-estimation studies to distinguish between competing PD models are lacking. This study compared the ability of static and dynamic in vitro infection models to distinguish between models with different resistance mechanisms and support accurate and precise parameter estimation. Monte Carlo simulations (MCS) were performed for models with one susceptible bacterial population without (M1) or with a resting stage (M2), a one population model with adaptive resistance (M5), models with pre-existing susceptible and resistant populations without (M3) or with (M4) inter-conversion, and a model with two pre-existing populations with adaptive resistance (M6). For each model, 200 datasets of the total bacterial population were simulated over 24h using static antibiotic concentrations (256-fold concentration range) or over 48h under dynamic conditions (dosing every 12h; elimination half-life: 1h). Twelve-hundred random datasets (each containing 20 curves for static or four curves for dynamic conditions) were generated by bootstrapping. Each dataset was estimated by all six models via population PD modeling to compare bias and precision. For M1 and M3, most parameter estimates were unbiased (<10%) and had good imprecision (<30%). However, parameters for adaptive resistance and inter-conversion for M2, M4, M5 and M6 had poor bias and large imprecision under static and dynamic conditions. For datasets that only contained viable counts of the total population, common statistical criteria and diagnostic plots did not support sound identification of the true resistance mechanism. Therefore, it seems advisable to quantify resistant bacteria and characterize their MICs and resistance mechanisms to support extended simulations and translate from in vitro experiments to animal infection models and ultimately patients. PMID:26967893
USDA-ARS?s Scientific Manuscript database
Various computer models, ranging from simple to complex, have been developed to simulate hydrology and water quality from field to watershed scales. However, many users are uncertain about which model to choose when estimating water quantity and quality conditions in a watershed. This study compared...
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal.
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
The Detection and Attribution Model Intercomparison Project (DAMIP v1.0)contribution to CMIP6
Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; ...
2016-10-18
Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of futuremore » climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.« less
The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6
NASA Astrophysics Data System (ADS)
Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; Hegerl, Gabriele; Knutti, Reto; Matthes, Katja; Santer, Benjamin D.; Stone, Daithi; Tebaldi, Claudia
2016-10-01
Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.
Population and Activity of On-road Vehicles in MOVES2014 ...
This report describes the sources and derivation for on-road vehicle population and activity information and associated adjustments as stored in the MOVES2014 default databases. Motor Vehicle Emission Simulator, the MOVES2014 model, is a set of modeling tools for estimating emissions produced by on-road (cars, trucks, motorcycles, etc.) and nonroad (backhoes, lawnmowers, etc.) mobile sources. The national default activity information in MOVES2014 provides a reasonable basis for estimating national emissions. However, the uncertainties and variability in the default data contribute to the uncertainty in the resulting emission estimates. Properly characterizing emissions from the on-road vehicle subset requires a detailed understanding of the cars and trucks that make up the vehicle fleet and their patterns of operation. The MOVES model calculates emission inventories by multiplying emission rates by the appropriate emission-related activity, applying correction (adjustment) factors as needed to simulate specific situations, and then adding up the emissions from all sources (populations) and regions. This report describes the sources and derivation for on-road vehicle population and activity information and associated adjustments as stored in the MOVES2014 default databases. Motor Vehicle Emission Simulator, the MOVES2014 model, is a set of modeling tools for estimating emissions produced by on-road (cars, trucks, motorcycles, etc.) and nonroad (backhoes, law
On the Scaling Laws and Similarity Spectra for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Kandula, Max
2008-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are reviewed with regard to their applicability to deduce full-scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full- scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. New results are presented showing the dependence of overall sound power level on the jet temperature ratio at various jet Mach numbers. A generalized similarity spectrum is also proposed, which accounts for convective Mach number and angle to the jet axis.
Tanaka, Yoshihisa; Nakamura, Shinichiro; Kuriyama, Shinichi; Ito, Hiromu; Furu, Moritoshi; Komistek, Richard D; Matsuda, Shuichi
2016-11-01
It is unknown whether a computer simulation with simple models can estimate individual in vivo knee kinematics, although some complex models have predicted the knee kinematics. The purposes of this study are first, to validate the accuracy of the computer simulation with our developed model during a squatting activity in a weight-bearing deep knee bend and then, to analyze the contact area and the contact stress of the tri-condylar implants for individual patients. We compared the anteroposterior (AP) contact positions of medial and lateral condyles calculated by the computer simulation program with the positions measured from the fluoroscopic analysis for three implanted knees. Then the contact area and the stress including the third condyle were calculated individually using finite element (FE) analysis. The motion patterns were similar in the simulation program and the fluoroscopic surveillance. Our developed model could nearly estimate the individual in vivo knee kinematics. The mean and maximum differences of the AP contact positions were 1.0mm and 2.5mm, respectively. At 120° of knee flexion, the contact area at the third condyle was wider than the both condyles. The mean maximum contact stress at the third condyle was lower than the both condyles at 90° and 120° of knee flexion. Individual bone models are required to estimate in vivo knee kinematics in our simple model. The tri-condylar implant seems to be safe for deep flexion activities due to the wide contact area and low contact stress. Copyright © 2016 Elsevier Ltd. All rights reserved.
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcript, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Variability in Temperature-Related Mortality Projections under Climate Change
Benmarhnia, Tarik; Sottile, Marie-France; Plante, Céline; Brand, Allan; Casati, Barbara; Fournier, Michel
2014-01-01
Background: Most studies that have assessed impacts on mortality of future temperature increases have relied on a small number of simulations and have not addressed the variability and sources of uncertainty in their mortality projections. Objectives: We assessed the variability of temperature projections and dependent future mortality distributions, using a large panel of temperature simulations based on different climate models and emission scenarios. Methods: We used historical data from 1990 through 2007 for Montreal, Quebec, Canada, and Poisson regression models to estimate relative risks (RR) for daily nonaccidental mortality in association with three different daily temperature metrics (mean, minimum, and maximum temperature) during June through August. To estimate future numbers of deaths attributable to ambient temperatures and the uncertainty of the estimates, we used 32 different simulations of daily temperatures for June–August 2020–2037 derived from three global climate models (GCMs) and a Canadian regional climate model with three sets of RRs (one based on the observed historical data, and two on bootstrap samples that generated the 95% CI of the attributable number (AN) of deaths). We then used analysis of covariance to evaluate the influence of the simulation, the projected year, and the sets of RRs used to derive the attributable numbers of deaths. Results: We found that < 1% of the variability in the distributions of simulated temperature for June–August of 2020–2037 was explained by differences among the simulations. Estimated ANs for 2020–2037 ranged from 34 to 174 per summer (i.e., June–August). Most of the variability in mortality projections (38%) was related to the temperature–mortality RR used to estimate the ANs. Conclusions: The choice of the RR estimate for the association between temperature and mortality may be important to reduce uncertainty in mortality projections. Citation: Benmarhnia T, Sottile MF, Plante C, Brand A, Casati B, Fournier M, Smargiassi A. 2014. Variability in temperature-related mortality projections under climate change. Environ Health Perspect 122:1293–1298; http://dx.doi.org/10.1289/ehp.1306954 PMID:25036003
Peterson, Steven M.; Flynn, Amanda T.; Traylor, Jonathan P.
2016-12-13
The High Plains aquifer is a nationally important water resource underlying about 175,000 square miles in parts of eight states: Colorado, Kansas, Oklahoma, Nebraska, New Mexico, South Dakota, Texas, and Wyoming. Droughts across much of the Northern High Plains from 2001 to 2007 have combined with recent (2004) legislative mandates to elevate concerns regarding future availability of groundwater and the need for additional information to support science-based water-resource management. To address these needs, the U.S. Geological Survey began the High Plains Groundwater Availability Study to provide a tool for water-resource managers and other stakeholders to assess the status and availability of groundwater resources.A transient groundwater-flow model was constructed using the U.S. Geological Survey modular three-dimensional finite-difference groundwater-flow model with Newton-Rhapson solver (MODFLOW–NWT). The model uses an orthogonal grid of 565 rows and 795 columns, and each grid cell measures 3,281 feet per side, with one variably thick vertical layer, simulated as unconfined. Groundwater flow was simulated for two distinct periods: (1) the period before substantial groundwater withdrawals, or before about 1940, and (2) the period of increasing groundwater withdrawals from May 1940 through April 2009. A soil-water-balance model was used to estimate recharge from precipitation and groundwater withdrawals for irrigation. The soil-water-balance model uses spatially distributed soil and landscape properties with daily weather data and estimated historical land-cover maps to calculate spatial and temporal variations in potential recharge. Mean annual recharge estimated for 1940–49, early in the history of groundwater development, and 2000–2009, late in the history of groundwater development, was 3.3 and 3.5 inches per year, respectively.Primary model calibration was completed using statistical techniques through parameter estimation using the parameter estimation suite of software with Tikhonov regularization. Calibration targets for the groundwater model included 343,067 groundwater levels measured in wells and 10,820 estimated monthly stream base flows at streamgages. A total of 1,312 parameters were adjusted during calibration to improve the match between calibration targets and simulated equivalents. Comparison of calibration targets to simulated equivalents indicated that, at the regional scale, the model correctly reproduced groundwater levels and stream base flows for 1940–2009. This comparison indicates that the model can be used to examine the likely response of the aquifer system to potential future stresses.Mean calibrated recharge for 1940–49 and 2000–2009 was smaller than that estimated with the soil-water-balance model. This indicated that although the general spatial patterns of recharge estimated with the soil-water-balance model were approximately correct at the regional scale of the Northern High Plains aquifer, the soil-water-balance model had overestimated recharge, and adjustments were needed to decrease recharge to improve the match of the groundwater model to calibration targets. The largest components of the simulated groundwater budgets were recharge from precipitation, recharge from canal seepage, outflows to evapotranspiration, and outflows to stream base flow. Simulated outflows to irrigation wells increased from 7 percent of total outflows in 1940–49 to 38 percent of 1970–79 total outflows and 49 percent of 2000–2009 total outflows.
Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.
2010-12-01
There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime
2018-02-01
Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables to get reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed-up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method of the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.
Estimating Classifier Accuracy Using Noisy Expert Labels
estimators to real -world problems is limited. We applythe estimators to labels simulated from three models of the expert labeling process and also four real ...thatconditional dependence between experts negatively impacts estimator performance. On two of the real datasets, the estimatorsclearly outperformed the
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
Andrews, William J.; Becker, Carol J.; Ryter, Derek W.; Smith, S. Jerrod
2016-01-19
Numerical groundwater-flow models were created to characterize flow systems in aquifers underlying this study area and areas of particular interest within the study area. Those models were used to estimate sustainable groundwater yields from parts of the North Canadian River alluvial aquifer, characterize groundwater/surface-water interactions, and estimate the effects of a 10-year simulated drought on streamflows and water levels in alluvial and bedrock aquifers. Pumping of wells at the Iron Horse Industrial Park was estimated to cause negligible infiltration of water from the adjoining North Canadian River. A 10-year simulated drought of 50 percent of normal recharge was tested for the period 1990–2000. For this period, the total amount of groundwater in storage was estimated to decrease by 8.6 percent in the North Canadian River alluvial aquifer and approximately 0.2 percent in the Central Oklahoma aquifer, and groundwater flow to streams was estimated to decrease by 28–37 percent. This volume of groundwater loss showed that the Central Oklahoma aquifer is a bedrock aquifer that has relatively low rates of recharge from the land surface. The simulated drought decreased simulated streamflow, composed of base flow, in the North Canadian River at Shawnee, Okla., which did not recover to predrought conditions until the relatively wet year of 2007 after the simulated drought period.
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from Breakthrough (BTC) curve data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Simulated E-Bomb Effects on Electronically Equipped Targets
2009-09-01
coupling model program (CEMPAT), pursuing a feasible geometry of attack, practical antennas, best coupling approximations of ground conductivity and...procedure to determine these possible effects is to estimate the electromagnetic coupling from first principles and simulations using a coupling model ...Applications .................................... 16 B. SYSTEM OF INTEREST MODEL AS A TARGET ............................. 16 1. Shielding Methods, as
USDA-ARS?s Scientific Manuscript database
The NTT (Nutrient Tracking Tool) was designed to provide an opportunity for all users, including producers, to simulate the complex models, such as APEX (Agricultural Policy Environmental eXtender) and associated required databases. The APEX model currently nested within NTT provides estimates of th...
USDA-ARS?s Scientific Manuscript database
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible un...
Myokit: A simple interface to cardiac cellular electrophysiology.
Clerx, Michael; Collins, Pieter; de Lange, Enno; Volders, Paul G A
2016-01-01
Myokit is a new powerful and versatile software tool for modeling and simulation of cardiac cellular electrophysiology. Myokit consists of an easy-to-read modeling language, a graphical user interface, single and multi-cell simulation engines and a library of advanced analysis tools accessible through a Python interface. Models can be loaded from Myokit's native file format or imported from CellML. Model export is provided to C, MATLAB, CellML, CUDA and OpenCL. Patch-clamp data can be imported and used to estimate model parameters. In this paper, we review existing tools to simulate the cardiac cellular action potential to find that current tools do not cater specifically to model development and that there is a gap between easy-to-use but limited software and powerful tools that require strong programming skills from their users. We then describe Myokit's capabilities, focusing on its model description language, simulation engines and import/export facilities in detail. Using three examples, we show how Myokit can be used for clinically relevant investigations, multi-model testing and parameter estimation in Markov models, all with minimal programming effort from the user. This way, Myokit bridges a gap between performance, versatility and user-friendliness. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Method for Modeling Household Occupant Behavior to Simulate Residential Energy Consumption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brandon J; Starke, Michael R; Abdelaziz, Omar
2014-01-01
This paper presents a statistical method for modeling the behavior of household occupants to estimate residential energy consumption. Using data gathered by the U.S. Census Bureau in the American Time Use Survey (ATUS), actions carried out by survey respondents are categorized into ten distinct activities. These activities are defined to correspond to the major energy consuming loads commonly found within the residential sector. Next, time varying minute resolution Markov chain based statistical models of different occupant types are developed. Using these behavioral models, individual occupants are simulated to show how an occupant interacts with the major residential energy consuming loadsmore » throughout the day. From these simulations, the minimum number of occupants, and consequently the minimum number of multiple occupant households, needing to be simulated to produce a statistically accurate representation of aggregate residential behavior can be determined. Finally, future work will involve the use of these occupant models along side residential load models to produce a high-resolution energy consumption profile and estimate the potential for demand response from residential loads.« less
Estimating hydrologic budgets for six Persian Gulf watersheds, Iran
NASA Astrophysics Data System (ADS)
Hosseini, Majid; Ghafouri, Mohammad; Tabatabaei, MahmoudReza; Goodarzi, Masoud; Mokarian, Zeinab
2017-10-01
Estimation of the major components of the hydrologic budget is important for determining the impacts on the water supply and quality of either planned or proposed land management projects, vegetative changes, groundwater withdrawals, and reservoir management practices and plans. As acquisition of field data is costly and time consuming, models have been created to test various land use practices and their concomitant effects on the hydrologic budget of watersheds. To simulate such management scenarios realistically, a model should be able to simulate the individual components of the hydrologic budget. The main objective of this study is to perform the SWAT2012 model for estimation of hydrological budget in six subbasin of Persian Gulf watershed; Golgol, Baghan, Marghab Shekastian, Tangebirim and Daragah, which are located in south and south west of Iran during 1991-2009. In order to evaluate the performance of the model, hydrological data, soil map, land use map and digital elevation model (DEM) are obtained and prepared for each catchment to run the model. SWAT-CUP with SUFI2 program was used for simulation, uncertainty and validation with 95 Percent Prediction Uncertainty. Coefficient of determination ( R 2) and Nash-Sutcliffe coefficient (NS) were used for evaluation of the model simulation results. Comparison of measured and predicted values demonstrated that each component of the model gave reasonable output and that the interaction among components was realistic. The study has produced a technique with reliable capability for annual and monthly water budget components in Persian Gulf watershed.
NASA Astrophysics Data System (ADS)
Song, Lanlan
2017-04-01
Nitrous oxide is much more potent greenhouse gas than carbon dioxide. However, the estimation of N2O flux is usually clouded with uncertainty, mainly due to high spatial and temporal variations. This hampers the development of general mechanistic models for N2O emission as well, as most previously developed models were empirical or exhibited low predictability with numerous assumptions. In this study, we tested General Regression Neural Networks (GRNN) as an alternative to classic empirical models for simulating N2O emission in riparian zones of Reservoirs. GRNN and nonlinear regression (NLR) were applied to estimate the N2O flux of 1-year observations in riparian zones of Three Gorge Reservoir. NLR resulted in lower prediction power and higher residuals compared to GRNN. Although nonlinear regression model estimated similar average values of N2O, it could not capture the fluctuation patterns accurately. In contrast, GRNN model achieved a fairly high predictability, with an R2 of 0.59 for model validation, 0.77 for model calibration (training), and a low root mean square error (RMSE), indicating a high capacity to simulate the dynamics of N2O flux. According to a sensitivity analysis of the GRNN, nonlinear relationships between input variables and N2O flux were well explained. Our results suggest that the GRNN developed in this study has a greater performance in simulating variations in N2O flux than nonlinear regressions.
Nonparametric estimation and testing of fixed effects panel data models
Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi
2009-01-01
In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
The Future of Drought in the Southeastern U.S.: Projections from downscaled CMIP5 models
NASA Astrophysics Data System (ADS)
Keellings, D.; Engstrom, J.
2017-12-01
The Southeastern U.S. has been repeatedly impacted by severe droughts that have affected the environment and economy of the region. In this study the ability of 32 downscaled CMIP5 models, bias corrected using localized constructed analogs (LOCA), to simulate historical observations of dry spells from 1950-2005 are assessed using Perkins skill scores and significance tests. The models generally simulate the distribution of dry days well but there are significant differences between the ability of the best and worst performing models, particularly when it comes to the upper tail of the distribution. The best and worst performing models are then projected through 2099, using RCP 4.5 and 8.5, and estimates of 20 year return periods are compared. Only the higher skill models provide a good estimate of extreme dry spell lengths with simulations of 20 year return values within ± 5 days of observed values across the region. Projected return values differ by model grouping, but all models exhibit significant increases.
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals, following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient, patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research is an analysis of the effects of priority-based routing of patients within the hospital and the effects on patient waiting times determined using various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts the impact of prehospital transport time on patient waiting time.
The US EPA National Exposure Research Laboratory (NERL) is currently refining and evaluating a population exposure model for particulate matter (PM), called the Stochastic Human Exposure and Dose Simulation (SHEDS-PM) model. The SHEDS-PM model estimates the population distribu...
NASA Astrophysics Data System (ADS)
Maksyutov, Shamil; Takagi, Hiroshi; Belikov, Dmitry A.; Saeki, Tazu; Zhuravlev, Ruslan; Ganshin, Alexander; Lukyanov, Alexander; Yoshida, Yukio; Oshchepkov, Sergey; Bril, Andrey; Saito, Makoto; Oda, Tomohiro; Valsala, Vinu K.; Saito, Ryu; Andres, Robert J.; Conway, Thomas; Tans, Pieter; Yokota, Tatsuya
2012-11-01
Inverse estimation of surface C02 fluxes is performed with atmospheric transport model using ground-based and GOSAT observations. The NIES-retrieved C02 column mixing (Xc02) and column averaging kernel are provided by GOSAT Level 2 product v. 2.0 and PPDF-DOAS method. Monthly mean C02 fluxes for 64 regions are estimated together with a global mean offset between GOSAT data and ground-based data. We used the fixed-lag Kalman filter to infer monthly fluxes for 42 sub-continental terrestrial regions and 22 oceanic basins. We estimate fluxes and compare results obtained by two inverse modeling approaches. In basic approach adopted in GOSAT Level4 product v. 2.01, we use aggregation of the GOSAT observations into monthly mean over 5x5 degree grids, fluxes are estimated independently for each region, and NIES atmospheric transport model is used for forward simulation. In the alternative method, the model-observation misfit is estimated for each observation separately and fluxes are spatially correlated using EOF analysis of the simulated flux variability similar to geostatistical approach, while transport simulation is enhanced by coupling with a Lagrangian transport model Flexpart. Both methods use using the same set of prior fluxes and region maps. Daily net ecosystem exchange (NEE) is predicted by the Vegetation Integrative Simulator for Trace gases (VISIT) optimized to match seasonal cycle of the atmospheric C02 . Monthly ocean-atmosphere C02 fluxes are produced with an ocean pC02 data assimilation system. Biomass burning fluxes were provided by the Global Fire Emissions Database (GFED); and monthly fossil fuel C02 emissions are estimated with ODIAC inventory. The results of analyzing one year of the GOSAT data suggest that when both GOSAT and ground-based data are used together, fluxes in tropical and other remote regions with lower associated uncertainties are obtained than in the analysis using only ground-based data. With version 2.0 of L2 Xc02 the fluxes appear reasonable for many regions and seasons, however there is a need for improving the L2 bias correction, data filtering and the inverse modeling method to reduce estimated flux anomalies visible in some areas. We also observe that application of spatial flux correlations with EOF based approach reduces flux anomalies.
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
ERIC Educational Resources Information Center
Wang, Wen-Chung
2004-01-01
The Pearson correlation is used to depict effect sizes in the context of item response theory. Amultidimensional Rasch model is used to directly estimate the correlation between latent traits. Monte Carlo simulations were conducted to investigate whether the population correlation could be accurately estimated and whether the bootstrap method…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perigaud C.; Dewitte, B.
The Zebiak and Cane model is used in its {open_quotes}uncoupled mode,{close_quotes} meaning that the oceanic model component is driven by the Florida State University (FSU) wind stress anomalies over 1980-93 to simulate sea surface temperature anomalies, and these are used in the atmospheric model component to generate wind anomalies. Simulations are compared with data derived from FSU winds, International Satellite Cloud Climatology Project cloud convection, Advanced Very High Resolution Radiometer SST, Geosat sea level, 20{degrees}C isotherm depth derived from an expendable bathythermograph, and current velocities estimated from drifters or current-meter moorings. Forced by the simulated SST, the atmospheric model ismore » fairly successful in reproducing the observed westerlies during El Nino events. The model fails to simulate the easterlies during La Nina 1988. The simulated forcing of the atmosphere is in very poor agreement with the heating derived from cloud convection data. Similarly, the model is fairly successful in reproducing the warm anomalies during El Nino events. However, it fails to simulate the observed cold anomalies. Simulated variations of thermocline depth agree reasonably well with observations. The model simulates zonal current anomalies that are reversing at a dominant 9-month frequency. Projecting altimetric observations on Kelvin and Rossby waves provides an estimate of zonal current anomalies, which is consistent with the ones derived from drifters or from current meter moorings. Unlike the simulated ones, the observed zonal current anomalies reverse from eastward during El Nino events to westward during La Nina events. The simulated 9-month oscillations correspond to a resonant mode of the basin. They can be suppressed by cancelling the wave reflection at the boundaries, or they can be attenuated by increasing the friction in the ocean model. 58 refs., 14 figs., 6 tabs.« less
van der Heijden, A A W A; Feenstra, T L; Hoogenveen, R T; Niessen, L W; de Bruijne, M C; Dekker, J M; Baan, C A; Nijpels, G
2015-12-01
To test a simulation model, the MICADO model, for estimating the long-term effects of interventions in people with and without diabetes. The MICADO model includes micro- and macrovascular diseases in relation to their risk factors. The strengths of this model are its population scope and the possibility to assess parameter uncertainty using probabilistic sensitivity analyses. Outcomes include incidence and prevalence of complications, quality of life, costs and cost-effectiveness. We externally validated MICADO's estimates of micro- and macrovascular complications in a Dutch cohort with diabetes (n = 498,400) by comparing these estimates with national and international empirical data. For the annual number of people undergoing amputations, MICADO's estimate was 592 (95% interquantile range 291-842), which compared well with the registered number of people with diabetes-related amputations in the Netherlands (728). The incidence of end-stage renal disease estimated using the MICADO model was 247 people (95% interquartile range 120-363), which was also similar to the registered incidence in the Netherlands (277 people). MICADO performed well in the validation of macrovascular outcomes of population-based cohorts, while it had more difficulty in reflecting a highly selected trial population. Validation by comparison with independent empirical data showed that the MICADO model simulates the natural course of diabetes and its micro- and macrovascular complications well. As a population-based model, MICADO can be applied for projections as well as scenario analyses to evaluate the long-term (cost-)effectiveness of population-level interventions targeting diabetes and its complications in the Netherlands or similar countries. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2016-12-01
Modeling of fate and transport of fecal bacteria in a watershed is generally a processed based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered as the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E.coli) source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models; Spatially Explicit Load Enrichment Calculation Tool (SELECT) and Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach of bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed as having levels of fecal indicator bacteria that posed a risk for contact recreation and wading by the Texas Commission of Environmental Quality (TCEQ). The SELECT modeling approach was used in estimating the bacteria source loading from land categories. Major bacteria sources considered were, failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (Cattle, Horses, Sheep and Goat), excreta from Wildlife (Feral Hogs, and Deer), Pet waste (mainly from Dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate the transport through the land and in-stream conditions. The calibrated SWAT model was then used to estimate the indicator bacteria in-stream concentrations for future years based on H-GAC's regional land use, population and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2017-12-01
Modeling of fate and transport of fecal bacteria in a watershed is a processed based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered as the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models; Spatially Explicit Load Enrichment Calculation Tool (SELECT) and Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach of bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed as having levels of fecal indicator bacteria that posed a risk for contact recreation and wading by the Texas Commission of Environmental Quality (TCEQ). The SELECT modeling approach was used in estimating the bacteria source loading from land categories. Major bacteria sources considered were, failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (Cattle, Horses, Sheep and Goat), excreta from Wildlife (Feral Hogs, and Deer), Pet waste (mainly from Dogs), and runoff from urban surfaces. The estimated source loads from SELECT model were input to the SWAT model, and simulate the bacteria transport through the land and in-stream. The calibrated SWAT model was then used to estimate the indicator bacteria in-stream concentrations for future years based on regional land use, population and household forecast (up to 2040). Based on the reductions required to meet the water quality standards in-stream, the corresponding required source load reductions were estimated.
A "total parameter estimation" method in the varification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally hydrological models are used for runoff or flood forecasting, hence the determination of model parameters are common estimated based on discharge measurements at the catchment outlets. With the advancement in hydrological sciences and computer technology, distributed hydrological models based on the physical mechanism such as SWAT, MIKESHE, and WEP, have gradually become the mainstream models in hydrology sciences. However, the assessments of distributed hydrological models and model parameter determination still rely on runoff and occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process and for flood forecasting in wet areas, we also need to grasp the water cycle pathways and consumption process of transformation in arid and semi-arid regions for the conservation and integrated water resources management. As distributed hydrological model can simulate physical processes within a catchment, we can get a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, using runoff for parameter estimation alone is inherits problematic and difficult to assess the accuracy. In particular, in the arid areas, such as the Haihe River Basin in China, runoff accounted for only 17% of the rainfall, and very concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus using single runoff simulation does not fully utilize the distributed hydrological model in arid and semi-arid regions. This paper proposed a "total parameter estimation" method to verify the distributed hydrological models within various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water; and apply it to the Haihe river basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and it provides a new way of thinking in hydrological sciences.
Mean-line Modeling of an Axial Turbine
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Ostapyuk, Ya A.; Filinov, E. P.
2018-01-01
The article describes the approach for axial turbine modeling along the mean line. It bases on the developed model of an axial turbine blade row. This model is suitable for both nozzle vanes and rotor blades simulations. Consequently, it allows the simulation of the single axial turbine stage as well as a multistage turbine. The turbine stage model can take into account the cooling air flow before and after a throat of each blade row, outlet straightener vanes existence and stagger angle controlling of nozzle vanes. The axial turbine estimation method includes the loss estimation and thermogasdynamic analysis. The single stage axial turbine was calculated with the developed model. The obtained results deviation was within 3% when comparing with the results of CFD modeling.
2013-09-01
model and the BRDF in the SRP model are not consistent with each other, then the resulting estimated albedo-areas and mass are inaccurate and biased...This work studies the use of physically consistent BRDF -SRP models for mass estimation. Simulation studies are used to provide an indication of the...benefits of using these new models . An unscented Kalman filter approach that includes BRDF and mass parameters in the state vector is used. The
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved,more » even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiple by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive ``learning`` algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give with probability one, a sequence of estimates converging exponentially fast to the true solution.« less
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-01-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to re-construct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through the Archie’s law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude of the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are very generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations
Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc
2014-01-01
The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimatation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
Statistical Methods for Assessments in Simulations and Serious Games. Research Report. ETS RR-14-12
ERIC Educational Resources Information Center
Fu, Jianbin; Zapata, Diego; Mavronikolas, Elia
2014-01-01
Simulation or game-based assessments produce outcome data and process data. In this article, some statistical models that can potentially be used to analyze data from simulation or game-based assessments are introduced. Specifically, cognitive diagnostic models that can be used to estimate latent skills from outcome data so as to scale these…
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes
2017-01-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...
Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H
2014-01-01
A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
David L. Peterson; Patrick J. Flowers
1984-01-01
A simulation model was developed to estimate postfire changes in the production and value of grazing lands in the Northern Rocky Mountain-Intermountain region. Ecological information and management decisions were used to simulate expected changes in production and value after wildfire in six major rangeland types: permanent forested range (ponderosa pine), transitory...
ERIC Educational Resources Information Center
Keane, Michael P.; Wolpin, Kenneth I.
2002-01-01
Part I uses simulations of a model of welfare participation and women's fertility decisions, showing that increases in per-child payments have substantial impact on fertility. Part II uses estimations of decision rules of forward-looking women regarding welfare participation, fertility, marriage, work, and schooling. (SK)
Estimation of the relative influence of climate change, compared to other human activities, on dynamics of Pacific salmon (Oncorhynchus spp.) populations can help management agencies take appropriate management actions. We used empirically based simulation modelling of 48 sockeye...
Estimating winter wheat phenological parameters: Implications for crop modeling
USDA-ARS?s Scientific Manuscript database
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacagnina, Carlo; Hasekamp, Otto P.; Bian, Huisheng
2015-09-27
The aerosol Single Scattering Albedo (SSA) over the global oceans is evaluated based on polarimetric measurements by the PARASOL satellite. The retrieved values for SSA and Aerosol Optical Depth (AOD) agree well with the ground-based measurements of the AErosol RObotic NETwork (AERONET). The global coverage provided by the PARASOL observations represents a unique opportunity to evaluate SSA and AOD simulated by atmospheric transport model runs, as performed in the AeroCom framework. The SSA estimate provided by the AeroCom models is generally higher than the SSA retrieved from both PARASOL and AERONET. On the other hand, the mean simulated AOD ismore » about right or slightly underestimated compared with observations. An overestimate of the SSA by the models would suggest that these simulate an overly strong aerosol radiative cooling at top-of-atmosphere (TOA) and underestimate it at surface. This implies that aerosols have a potential stronger impact within the atmosphere than currently simulated.« less
Estimating abundance in the presence of species uncertainty
Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.
2016-01-01
1.N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and it can often limit life stages to which we can make inference. 2.We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3.We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), ad illustrate implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4.This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms’ early life stages, which are sometimes more difficult to identify than adults.
Pool, D.R.; Dickinson, Jesse
2007-01-01
A numerical ground-water model was developed to simulate seasonal and long-term variations in ground-water flow in the Sierra Vista subwatershed, Arizona, United States, and Sonora, Mexico, portions of the Upper San Pedro Basin. This model includes the simulation of details of the groundwater flow system that were not simulated by previous models, such as ground-water flow in the sedimentary rocks that surround and underlie the alluvial basin deposits, withdrawals for dewatering purposes at the Tombstone mine, discharge to springs in the Huachuca Mountains, thick low-permeability intervals of silt and clay that separate the ground-water flow system into deep-confined and shallow-unconfined systems, ephemeral-channel recharge, and seasonal variations in ground-water discharge by wells and evapotranspiration. Steady-state and transient conditions during 1902-2003 were simulated by using a five-layer numerical ground- water flow model representing multiple hydrogeologic units. Hydraulic properties of model layers, streamflow, and evapotranspiration rates were estimated as part of the calibration process by using observed water levels, vertical hydraulic gradients, streamflow, and estimated evapotranspiration rates as constraints. Simulations approximate observed water-level trends throughout most of the model area and streamflow trends at the Charleston streamflow-gaging station on the San Pedro River. Differences in observed and simulated water levels, streamflow, and evapotranspiration could be reduced through simulation of climate-related variations in recharge rates and recharge from flood-flow infiltration.
Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)
NASA Technical Reports Server (NTRS)
Burns, M. R.
1979-01-01
A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.
Pool, D.R.; Blasch, Kyle W.; Callegary, James B.; Leake, Stanley A.; Graser, Leslie F.
2011-01-01
A numerical flow model (MODFLOW) of the groundwater flow system in the primary aquifers in northern Arizona was developed to simulate interactions between the aquifers, perennial streams, and springs for predevelopment and transient conditions during 1910 through 2005. Simulated aquifers include the Redwall-Muav, Coconino, and basin-fill aquifers. Perennial stream reaches and springs that derive base flow from the aquifers were simulated, including the Colorado River, Little Colorado River, Salt River, Verde River, and perennial reaches of tributary streams. Simulated major springs include Blue Spring, Del Rio Springs, Havasu Springs, Verde River headwater springs, several springs that discharge adjacent to major Verde River tributaries, and many springs that discharge to the Colorado River. Estimates of aquifer hydraulic properties and groundwater budgets were developed from published reports and groundwater-flow models. Spatial extents of aquifers and confining units were developed from geologic data, geophysical models, a groundwater-flow model for the Prescott Active Management Area, drill logs, geologic logs, and geophysical logs. Spatial and temporal distributions of natural recharge were developed by using a water-balance model that estimates recharge from direct infiltration. Additional natural recharge from ephemeral channel infiltration was simulated in alluvial basins. Recharge at wastewater treatment facilities and incidental recharge at agricultural fields and golf courses were also simulated. Estimates of predevelopment rates of groundwater discharge to streams, springs, and evapotranspiration by phreatophytes were derived from previous reports and on the basis of streamflow records at gages. Annual estimates of groundwater withdrawals for agriculture, municipal, industrial, and domestic uses were developed from several sources, including reported withdrawals for nonexempt wells, estimated crop requirements for agricultural wells, and estimated per capita water use for exempt wells. Accuracy of the simulated groundwater-flow system was evaluated by using observational control from water levels in wells, estimates of base flow from streamflow records, and estimates of spring discharge. Major results from the simulations include the importance of variations in recharge rates throughout the study area and recharge along ephemeral and losing stream reaches in alluvial basins. Insights about the groundwater-flow systems in individual basins include the hydrologic influence of geologic structures in some areas and that stream-aquifer interactions along the lower part of the Little Colorado River are an effective control on water level distributions throughout the Little Colorado River Plateau basin. Better information on several aspects of the groundwater flow system are needed to reduce uncertainty of the simulated system. Many areas lack documentation of the response of the groundwater system to changes in withdrawals and recharge. Data needed to define groundwater flow between vertically adjacent water-bearing units is lacking in many areas. Distributions of recharge along losing stream reaches are poorly defined. Extents of aquifers and alluvial lithologies are poorly defined in parts of the Big Chino and Verde Valley sub-basins. Aquifer storage properties are poorly defined throughout most of the study area. Little data exist to define the hydrologic importance of geologic structures such as faults and fractures. Discharge of regional groundwater flow to the Verde River is difficult to identify in the Verde Valley sub-basin because of unknown contributions from deep percolation of excess surface water irrigation.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt.17, 107003 (2012)JBOPFO1083-366810.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt.19, 077002 (2014)JBOPFO1083-366810.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
Hughes, Richard E; Nelson, Nancy A
2009-05-01
A mathematical model was developed for estimating the net present value (NPV) of the cash flow resulting from an investment in an intervention to prevent occupational low back pain (LBP). It combines biomechanics, epidemiology, and finance to give an integrated tool for a firm to use to estimate the investment worthiness of an intervention based on a biomechanical analysis of working postures and hand loads. The model can be used by an ergonomist to estimate the investment worthiness of a proposed intervention. The analysis would begin with a biomechanical evaluation of the current job design and post-intervention job. Economic factors such as hourly labor cost, overhead, workers' compensation costs of LBP claims, and discount rate are combined with the biomechanical analysis to estimate the investment worthiness of the proposed intervention. While this model is limited to low back pain, the simulation framework could be applied to other musculoskeletal disorders. The model uses Monte Carlo simulation to compute the statistical distribution of NPV, and it uses a discrete event simulation paradigm based on four states: (1) working and no history of lost time due to LBP, (2) working and history of lost time due to LBP, (3) lost time due to LBP, and (4) leave job. Probabilities of transitions are based on an extensive review of the epidemiologic review of the low back pain literature. An example is presented.
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Mengistu, Zelalem
2016-12-01
In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average of the Nash-Sutcliffe efficiency criterion of 0.73 was obtained using the new estimated storage routine compared with 0.75 using calibrated storage routine. The average Kling-Gupta efficiency criterion was 0.80 and 0.81 for the new and old storage routine, respectively. Runoff recessions are more realistically modelled using the new approach since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50 % using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance system. The Bayesian analyses are more beneficial than the classical one in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an apriori distribution with life test data to make inferences of the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed by using Bayesian and the maximum likelihood analyses. The simulation results show that the change of the true of parameter relatively to another will change the value of standard deviation in an opposite direction. For a perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimated value lines.
NASA Astrophysics Data System (ADS)
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Shahmirzadi, Danial; Li, Ronny X; Konofagou, Elisa E
2012-11-01
Pulse wave imaging (PWI) is an ultrasound-based method for noninvasive characterization of arterial stiffness based on pulse wave propagation. Reliable numerical models of pulse wave propagation in normal and pathological aortas could serve as powerful tools for local pulse wave analysis and a guideline for PWI measurements in vivo. The objectives of this paper are to (1) apply a fluid-structure interaction (FSI) simulation of a straight-geometry aorta to confirm the Moens-Korteweg relationship between the pulse wave velocity (PWV) and the wall modulus, and (2) validate the simulation findings against phantom and in vitro results. PWI depicted and tracked the pulse wave propagation along the abdominal wall of canine aorta in vitro in sequential Radio-Frequency (RF) ultrasound frames and estimates the PWV in the imaged wall. The same system was also used to image multiple polyacrylamide phantoms, mimicking the canine measurements as well as modeling softer and stiffer walls. Finally, the model parameters from the canine and phantom studies were used to perform 3D two-way coupled FSI simulations of pulse wave propagation and estimate the PWV. The simulation results were found to correlate well with the corresponding Moens-Korteweg equation. A high linear correlation was also established between PWV² and E measurements using the combined simulation and experimental findings (R² = 0.98) confirming the relationship established by the aforementioned equation.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
[Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].
Yang, Xi-guang; Fan, Wen-yi; Yu, Ying
2010-11-01
The forest canopy chlorophyll content directly reflects the health and stress of forest. The accurate estimation of the forest canopy chlorophyll content is a significant foundation for researching forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on PROSPECT and SAIL models from the physical mechanism angle. First, leaf spectrum and canopy spectrum were simulated by PROSPECT and SAIL models respectively. And leaf chlorophyll content look-up-table was established for leaf chlorophyll content retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content by Leaf Area Index (LAD). Finally, canopy chlorophyll content was estimated from Hyperion image. The results indicated that the main effect bands of chlorophyll content were 400-900 nm, the simulation of leaf and canopy spectrum by PROSPECT and SAIL models fit better with the measured spectrum with 7.06% and 16.49% relative error respectively, the RMSE of LAI inversion was 0. 542 6 and the forest canopy chlorophyll content was estimated better by PROSPECT and SAIL models with precision = 77.02%.
Real-time 3-D space numerical shake prediction for earthquake early warning
NASA Astrophysics Data System (ADS)
Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang
2017-12-01
In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from the inaccurate estimation of source parameters. For computation efficiency, wave direction is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, the 2-D space modeling of wave direction results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates the wave propagation in 3-D space using radiative transfer theory, and incorporate data assimilation technique to estimate the distribution of wave energy. 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. 2-D space model and 3-D space model are compared in this article, and the prediction results show that numerical shake prediction based on 3-D space model can estimate the real-time ground motion precisely, and overprediction is alleviated when using 3-D space model.
Robert E. Keane; Lisa M. Holsinger; Sarah D. Pratt
2006-01-01
The range and variation of historical landscape dynamics could provide a useful reference for designing fuel treatments on today's landscapes. Simulation modeling is a vehicle that can be used to estimate the range of conditions experienced on historical landscapes. A landscape fire succession model called LANDSUMv4 (LANDscape SUccession Model version 4.0) is...
Contributions of Uncertainty in Droplet Nucleation to the Indirect Effect in Global Models
NASA Astrophysics Data System (ADS)
Rothenberg, D. A.; Wang, C.; Avramov, A.
2016-12-01
Anthropogenic aerosol perturbations to clouds and climate (the indirect effect, or AIE) contribute significant uncertainty towards understanding contemporary climate change. Despite refinements over the past two decades, modern global aerosol-climate models widely disagree on the magnitude of AIE, and wholly disagree with satellite estimates. Part of the spread in estimates of AIE arises from a lack of constraints on what exactly comprised the pre-industrial atmospheric aerosol burden, but another component is attributable to inter-model differences in simulating the chain of aerosol-cloud-precipitation processes which ultimately produce the indirect effect. Thus, one way to help constrain AIE is to thoroughly investigate the differences in aerosol-cloud processes and interactions occurring in these models. We have configured one model, the CESM/MARC, with a suite of parameterizations affecting droplet activation. Each configuration produces similar climatologies with respect to precipitation and cloud macrophysics, but shows different sensitivies to aerosol perturbation - up to 1 W/m^2 differences in AIE. Regional differences in simulated aerosol-cloud interactions, especially in marine regions with little anthropogenic pollution, contribute to the spread in these AIE estimates. The baseline pre-industrial droplet number concentration in marine regions dominated by natural aerosol strongly predicts the magnitude of each model's AIE, suggesting that targeted observations of cloud microphysical properties across different cloud regimes and their sensitivity to aerosol influences could help provide firm constraints and targets for models. Additionally, we have performed supplemental fully-coupled (atmosphere/ocean) simulations with each model configuration, allowing the model to relax to equilibrium following a change in aerosol emissions. These simulations allow us to assess the slower-timescale responses to aerosol perturbations. The spread in fast model responses (which produce the noted changes in indirect effect or forcing) gives rise to large differences in the equilibrium climate state of each configuration. We show that these changes in equilibrium climate state have implications for AIE estimates from model configurations tuned to the present-day climate.
Estimation of power lithium-ion battery SOC based on fuzzy optimal decision
NASA Astrophysics Data System (ADS)
He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin
2018-06-01
In order to improve vehicle performance and safety, need to accurately estimate the power lithium battery state of charge (SOC), analyzing the common SOC estimation methods, according to the characteristics open circuit voltage and Kalman filter algorithm, using T - S fuzzy model, established a lithium battery SOC estimation method based on the fuzzy optimal decision. Simulation results show that the battery model accuracy can be improved.
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle –climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. Here, we interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including amore » large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth it provides a global-scale benchmark for historical carbon cycle simulations.« less
Large historical growth in global terrestrial gross primary production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, J. E.; Berry, J. A.; Seibt, U.
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle –climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. Here, we interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including amore » large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth it provides a global-scale benchmark for historical carbon cycle simulations.« less
Halford, K.J.
1998-01-01
Ground-water flow through the surficial aquifer system at Naval Station Mayport near Jacksonville, Florida, was simulated with a two-layer finite-difference model as part of an investigation conducted by the U.S. Geological Survey. The model was calibrated to 229 water-level measurements from 181 wells during three synoptic surveys (July 17, 1995; July 31, 1996; and October 24, 1996). A quantifiable understanding of ground-water flow through the surficial aquifer was needed to evaluate remedial-action alternatives under consideration by the Naval Station Mayport to control the possible movement of contaminants from sites on the station. Multi-well aquifer tests, single-well tests, and slug tests were conducted to estimate the hydraulic properties of the surficial aquifer system, which was divided into three geohydrologic units?an S-zone and an I-zone separated by a marsh-muck confining unit. The recharge rate was estimated to range from 4 to 15 inches per year (95 percent confidence limits), based on a chloride-ratio method. Most of the simulations following model calibration were based on a recharge rate of 8 inches per year to unirrigated pervious areas. The advective displacement of saline pore water during the last 200 years was simulated using a particle-tracking routine, MODPATH, applied to calibrated steady-state and transient models of the Mayport peninsula. The surficial aquifer system at Naval Station Mayport has been modified greatly by natural and anthropogenic forces so that the freshwater flow system is expanding and saltwater is being flushed from the system. A new MODFLOW package (VAR1) was written to simulate the temporal variation of hydraulic properties caused by construction activities at Naval Station Mayport. The transiently simulated saltwater distribution after 200 years of displacement described the chloride distribution in the I-zone (determined from measurements made during 1993 and 1996) better than the steady-state simulation. The advective movement of contaminants from selected sites within the solid waste management units to discharge points was simulated using MODPATH. Most of the particles were discharged to the nearest surface-water feature after traveling less than 1,000 feet in the ground-water system. Most areas within 1,000 feet of a surface-water feature or storm sewer had traveltimes of less than 50 years, based on an effective porosity of 40 percent. Contributing areas, traveltimes, and pathlines were identified for 224 wells at Naval Station Mayport under steady-state and transient conditions by back-tracking a particle from the midpoint of the wetted screen of each well. Traveltimes to contributing areas that ranged between 15 and 50 years, estimated by the steady-state model, differed most from the transient traveltime estimates. Estimates of traveltimes and pathlines based on steady-state model results typically were 10 to 20 years more and about twice as long as corresponding estimates from the transient model. The models differed because the steady-state model simulated 1996 conditions when Naval Station Mayport had more impervious surfaces than at any earlier time. The expansion of the impervious surfaces increased the average distance between contributing areas and observation wells.
NASA Astrophysics Data System (ADS)
Quaas, J.; Ming, Y.; Menon, S.; Takemura, T.; Wang, M.; Penner, J. E.; Gettelman, A.; Lohmann, U.; Bellouin, N.; Boucher, O.; Sayer, A. M.; Thomas, G. E.; McComiskey, A.; Feingold, G.; Hoose, C.; Kristjánsson, J. E.; Liu, X.; Balkanski, Y.; Donner, L. J.; Ginoux, P. A.; Stier, P.; Grandey, B.; Feichter, J.; Sednev, I.; Bauer, S. E.; Koch, D.; Grainger, R. G.; Kirkevåg, A.; Iversen, T.; Seland, Ø.; Easter, R.; Ghan, S. J.; Rasch, P. J.; Morrison, H.; Lamarque, J.-F.; Iacono, M. J.; Kinne, S.; Schulz, M.
2009-11-01
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the model explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect mainly in terms of an autoconversion parameterisation has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa as found in the satellite data is simulated by the majority of the models, albeit less strongly than that in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld-τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR-τa relationship show a strong positive correlation between τa and fcld. The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa, and parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships yields a global annual mean value of -1.5±0.5 Wm-2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd-τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of -0.4±0.2 Wm-2 and a cloudy-sky (aerosol indirect effect) estimate of -0.7±0.5 Wm-2, with a total estimate of -1.2±0.4 Wm-2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaas, Johannes; Ming, Yi; Menon, Surabi
2010-03-12
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the model explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth ({tau}{sub a}) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is foundmore » that the model-simulated influence of aerosols on cloud droplet number concentration (N{sub d}) compares relatively well to the satellite data at least over the ocean. The relationship between {tau}{sub a} and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect mainly in terms of an autoconversion parameterisation has to be revisited in the GCMs. A positive relationship between total cloud fraction (f{sub cld}) and {tau}{sub a} as found in the satellite data is simulated by the majority of the models, albeit less strongly than that in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong f{sub cld} - {tau}{sub a} relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between {tau}{sub a} and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR - {tau}{sub a} relationship show a strong positive correlation between {tau}{sub a} and f{sub cld} The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of {tau}{sub a}, and parameterization assumptions such as a lower bound on N{sub d}. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships yields a global annual mean value of -1.5 {+-} 0.5 Wm{sup -2}. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic {tau}{sub a} and satellite-retrieved Nd - {tau}{sub a} regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of -0.4 {+-} 0.2 Wm{sup -2} and a cloudy-sky (aerosol indirect effect) estimate of -0.7 {+-} 0.5 Wm{sup -2}, with a total estimate of -1.2 {+-} 0.4 Wm{sup -2}.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Dale A.
This model description is supplemental to the Lawrence Livermore National Laboratory (LLNL) report LLNL-TR-642494, Technoeconomic Evaluation of MEA versus Mixed Amines for CO2 Removal at Near- Commercial Scale at Duke Energy Gibson 3 Plant. We describe the assumptions and methodology used in the Laboratory’s simulation of its understanding of Huaneng’s novel amine solvent for CO2 capture with 35% mixed amine. The results of that simulation have been described in LLNL-TR-642494. The simulation was performed using ASPEN 7.0. The composition of the Huaneng’s novel amine solvent was estimated based on information gleaned from Huaneng patents. The chemistry of the process wasmore » described using nine equations, representing reactions within the absorber and stripper columns using the ELECTNRTL property method. As a rate-based ASPEN simulation model was not available to Lawrence Livermore at the time of writing, the height of a theoretical plate was estimated using open literature for similar processes. Composition of the flue gas was estimated based on information supplied by Duke Energy for Unit 3 of the Gibson plant. The simulation was scaled at one million short tons of CO2 absorbed per year. To aid stability of the model, convergence of the main solvent recycle loop was implemented manually, as described in the Blocks section below. Automatic convergence of this loop led to instability during the model iterations. Manual convergence of the loop enabled accurate representation and maintenance of model stability.« less
Peressutti, Devis; Penney, Graeme P; Housden, R James; Kolbitsch, Christoph; Gomez, Alberto; Rijkhorst, Erik-Jan; Barratt, Dean C; Rhode, Kawal S; King, Andrew P
2013-05-01
In image-guided cardiac interventions, respiratory motion causes misalignments between the pre-procedure roadmap of the heart used for guidance and the intra-procedure position of the heart, reducing the accuracy of the guidance information and leading to potentially dangerous consequences. We propose a novel technique for motion-correcting the pre-procedural information that combines a probabilistic MRI-derived affine motion model with intra-procedure real-time 3D echocardiography (echo) images in a Bayesian framework. The probabilistic model incorporates a measure of confidence in its motion estimates which enables resolution of the potentially conflicting information supplied by the model and the echo data. Unlike models proposed so far, our method allows the final motion estimate to deviate from the model-produced estimate according to the information provided by the echo images, so adapting to the complex variability of respiratory motion. The proposed method is evaluated using gold-standard MRI-derived motion fields and simulated 3D echo data for nine volunteers and real 3D live echo images for four volunteers. The Bayesian method is compared to 5 other motion estimation techniques and results show mean/max improvements in estimation accuracy of 10.6%/18.9% for simulated echo images and 20.8%/41.5% for real 3D live echo data, over the best comparative estimation method. Copyright © 2013 Elsevier B.V. All rights reserved.
Lantry, B.F.; Rudstam, L. G.; Forney, J.L.; VanDeValk, A.J.; Mills, E.L.; Stewart, D.J.; Adams, J.V.
2008-01-01
Daily consumption was estimated from the stomach contents of walleyes Sander vitreus collected weekly from Oneida Lake, New York, during June-October 1975, 1992, 1993, and 1994 for one to four age-groups per year. Field rations were highly variable between weeks, and trends in ration size varied both seasonally and annually. The coefficient of variation for weekly field rations within years and ages ranged from 45% to 97%. Field estimates were compared with simulated consumption from a bioenergetics model. The simulation averages of daily ration deviated from those of the field estimates by -20.1% to +70.3%, with a mean across all simulations of +14.3%. The deviations for each time step were much greater than those for the simulation averages, ranging from -92.8% to +363.6%. A systematic trend in the deviations was observed, the model producing overpredictions at rations less than 3.7% of body weight. Analysis of variance indicated that the deviations were affected by sample year and week but not age. Multiple linear regression using backwards selection procedures and Akaike's information criterion indicated that walleye weight, walleye growth, lake temperature, prey energy density, and the proportion of gizzard shad Dorosoma cepedianum in the diet significantly affected the deviations between simulated and field rations and explained 32% of the variance. ?? Copyright by the American Fisheries Society 2008.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ({{h}\\text{Organ}} ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} with the organ dose coefficients ({{h}\\text{Organ}} ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good accuracy with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real-time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is the iterative image approximation in 2D with subpixel accuracy until the appropriate simulated is found, matching the simulated and acquired images. A numerical image model is presented consisting of three parts: an edge model, object and background brightness distribution model, lens aberrations model including diffraction. The optimal values of model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between acquired and simulated images. Computationally-effective procedure for the merit function calculation along with sufficient gradient approximation is described. Subpixel-accuracy image simulation is performed in a Fourier domain with theoretically unlimited precision of edge points location. The method is capable of compensating lens aberrations and obtaining the edge information with increased resolution. Experimental method verification with digital micromirror device applied to physically simulate an object with known edge geometry is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C..2400°C are presented.
Jung, Kwang-Wook; Yoon, Choon-G; Jang, Jae-Ho; Kong, Dong-Soo
2008-01-01
Effective watershed management often demands qualitative and quantitative predictions of the effect of future management activities as arguments for policy makers and administration. The BASINS geographic information system was developed to compute total maximum daily loads, which are helpful to establish hydrological process and water quality modeling system. In this paper the BASINS toolkit HSPF model is applied in 20,271 km(2) large watershed of the Han River Basin is used for applicability of HSPF and BMPs scenarios. For proper evaluation of watershed and stream water quality, comprehensive estimation methods are necessary to assess large amounts of point source and nonpoint-source (NPS) pollution based on the total watershed area. In this study, The Hydrological Simulation Program-FORTRAN (HSPF) was estimated to simulate watershed pollutant loads containing dam operation and applied BMPs scenarios for control NPS pollution. The 8-day monitoring data (about three years) were used in the calibration and verification processes. Model performance was in the range of "very good" and "good" based on percent difference. The water-quality simulation results were encouraging for this large sizable watershed with dam operation practice and mixed land uses; HSPF proved adequate, and its application is recommended to simulate watershed processes and BMPs evaluation. IWA Publishing 2008.
Tsukahara, Y; Oishi, K; Hirooka, H
2011-12-01
A deterministic simulation model was developed to estimate biological production efficiency and to evaluate goat crossbreeding systems under tropical conditions. The model involves 5 production systems: pure indigenous, first filial generations (F1), backcross (BC), composite breeds of F1 (CMP(F1)), and BC (CMP(BC)). The model first simulates growth, reproduction, lactation, and energy intakes of a doe and a kid on a 1-d time step at the individual level and thereafter the outputs are integrated into the herd dynamics program. The ability of the model to simulate individual performances was tested under a base situation. The simulation results represented daily BW changes, ME requirements, and milk yield and the estimates were within the range of published data. Two conventional goat production scenarios (an intensive milk production scenario and an integrated goat and oil palm production scenario) in Malaysia were examined. The simulation results of the intensive milk production scenario showed the greater production efficiency of the CMP(BC) and CMP(F1) systems and decreased production efficiency of the F1 and BC systems. The results of the integrated goat and oil palm production scenario showed that the production efficiency and stocking rate were greater for the indigenous goats than for the crossbreeding systems.
Observed and Simulated Eddy Diffusivity Upstream of the Drake Passage
NASA Astrophysics Data System (ADS)
Tulloch, R.; Ferrari, R. M.; Marshall, J.
2012-12-01
Estimates of eddy diffusivity in the Southern Ocean are poorly constrained due to lack of observations. We compare the first direct estimate of isopycnal eddy diffusivity upstream of the Drake Passage (from Ledwell et al. 2011) with a numerical simulation. The estimate is computed from a point tracer release as part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). We find that the observational diffusivity estimate of about 500m^2/s at 1500m depth is close to that computed in a data-constrained, 1/20th of a degree simulation of the Drake Passage region. This tracer estimate also agrees with Lagrangian float calculations in the model. The role of mean flow suppression of eddy diffusivity at shallower depths will also be discussed.
A Hydrological Modeling Framework for Flood Risk Assessment for Japan
NASA Astrophysics Data System (ADS)
Ashouri, H.; Chinnayakanahalli, K.; Chowdhary, H.; Sen Gupta, A.
2016-12-01
Flooding has been the most frequent natural disaster that claims lives and imposes significant economic losses to human societies worldwide. Japan, with an annual rainfall of up to approximately 4000 mm is extremely vulnerable to flooding. The focus of this research is to develop a macroscale hydrologic model for simulating flooding toward an improved understanding and assessment of flood risk across Japan. The framework employs a conceptual hydrological model, known as the Probability Distributed Model (PDM), as well as the Muskingum-Cunge flood routing procedure for simulating streamflow. In addition, a Temperature-Index model is incorporated to account for snowmelt and its contribution to streamflow. For an efficient calibration of the model, in terms of computational timing and convergence of the parameters, a set of A Priori parameters is obtained based on the relationships between the model parameters and the physical properties of watersheds. In this regard, we have implemented a particle tracking algorithm and a statistical model which use high resolution Digital Terrain Models to estimate different time related parameters of the model such as time to peak of the unit hydrograph. In addition, global soil moisture and depth data are used to generate A Priori estimation of maximum soil moisture capacity, an important parameter of the PDM model. Once the model is calibrated, its performance is examined during the Typhoon Nabi which struck Japan in September 2005 and caused severe flooding throughout the country. The model is also validated for the extreme precipitation event in 2012 which affected Kyushu. In both cases, quantitative measures show that simulated streamflow depicts good agreement with gauge-based observations. The model is employed to simulate thousands of possible flood events for the entire Japan which makes a basis for a comprehensive flood risk assessment and loss estimation for the flood insurance industry.
Full-envelope aerodynamic modeling of the Harrier aircraft
NASA Technical Reports Server (NTRS)
Mcnally, B. David
1986-01-01
A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified, using mathematical model structures and parameter identification methods. A global-polynomial model structure is also used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification.Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about 0.3 to 0.7 Mach and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates based on this simulated data set.
a Computer Simulation Study of Coherent Optical Fibre Communication Systems
NASA Astrophysics Data System (ADS)
Urey, Zafer
Available from UMI in association with The British Library. A computer simulation study of coherent optical fibre communication systems is presented in this thesis. The Wiener process is proposed as the simulation model of laser phase noise and verified to be a good one. This model is included in the simulation experiments along with the other noise sources (i.e shot noise, thermal noise and laser intensity noise) and the models that represent the various waveform processing blocks in a system such as filtering, demodulation, etc. A novel mixed-semianalytical simulation procedure is designed and successfully applied for the estimation of bit error rates as low as 10^{-10 }. In this technique the noise processes and the ISI effects at the decision time are characterized from simulation experiments but the calculation of the probability of error is obtained by numerically integrating the noise statistics over the error region using analytical expressions. Simulation of only 4096 bits is found to give estimates of BER's corresponding to received optical power within 1 dB of the theoretical calculations using this approach. This number is very small when compared with the pure simulation techniques. Hence, the technique is proved to be very efficient in terms of the computation time and the memory requirements. A command driven simulation software which runs on a DEC VAX computer under the UNIX operating system is written by the author and a series of simulation experiments are carried out using this software. In particular, the effects of IF filtering on the performance of PSK heterodyne receivers with synchronous demodulation are examined when both the phase noise and the shot noise are included in the simulations. The BER curves of this receiver are estimated for the first time for various cases of IF filtering using the mixed-semianalytical approach. At a power penalty of 1 dB the IF linewidth requirement of this receiver with the matched filter is estimated to be less than 650 kHz at the modulation rate of 1 Gbps and BER of 10 ^{-9}. The IF linewidth requirement for other IF filtering cases are also estimated. The results are not found to be much different from the matched filter case. Therefore, it is concluded that IF filtering does not have any effect for the reduction of phase noise in PSK heterodyne systems with synchronous demodulation.
Testing simulation and structural models with applications to energy demand
NASA Astrophysics Data System (ADS)
Wolff, Hendrik
2007-12-01
This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality theory. Both results would not necessarily be achieved using standard econometric methods. The final chapter "Daylight Time and Energy" uses a quasi-experiment to evaluate a popular energy conservation policy: we challenge the conventional wisdom that extending Daylight Saving Time (DST) reduces energy demand. Using detailed panel data on half-hourly electricity consumption, prices, and weather conditions from four Australian states we employ a novel 'triple-difference' technique to test the electricity-saving hypothesis. We show that the extension failed to reduce electricity demand and instead increased electricity prices. We also apply the most sophisticated electricity simulation model available in the literature to the Australian data. We find that prior simulation models significantly overstate electricity savings. Our results suggest that extending DST will fail as an instrument to save energy resources.
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L
2016-10-01
Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on the literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated using tracking of the IF displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects-one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J
2016-12-08
The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne objects as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensor estimated with geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of principal moments of inertia within ±10% and to changes in orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameter. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy
2018-05-01
The genetic effective population size, N e , can be estimated from the average gametic disequilibrium (r2^) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of matlab computer code functions to estimate Ne^ from r2^ with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of Ne^ estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on Ne^, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact Ne^. Estimators differed greatly in precision in the scenarios examined, and a widely employed Ne^ estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of Ne^ from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare Ne^ estimators. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Cunha, J. S.; Cavalcante, F. R.; Souza, S. O.; Souza, D. N.; Santos, W. S.; Carvalho Júnior, A. B.
2017-11-01
One of the main criteria that must be held in Total Body Irradiation (TBI) is the uniformity of dose in the body. In TBI procedures the certification that the prescribed doses are absorbed in organs is made with dosimeters positioned on the patient skin. In this work, we modelled TBI scenarios in the MCNPX code to estimate the entrance dose rate in the skin for comparison and validation of simulations with experimental measurements from literature. Dose rates were estimated simulating an ionization chamber laterally positioned on thorax, abdomen, leg and thigh. Four exposure scenarios were simulated: ionization chamber (S1), TBI room (S2), and patient represented by hybrid phantom (S3) and water stylized phantom (S4) in sitting posture. The posture of the patient in experimental work was better represented by S4 compared with hybrid phantom, and this led to minimum and maximum percentage differences of 1.31% and 6.25% to experimental measurements for thorax and thigh regions, respectively. As for all simulations reported here the percentage differences in the estimated dose rates were less than 10%, we considered that the obtained results are consistent with experimental measurements and the modelled scenarios are suitable to estimate the absorbed dose in organs during TBI procedure.
AN ENVIRONMENTAL SIMULATION MODEL FOR TRANSPORT AND FATE OF MERCURY IN SMALL RURAL CATCHMENTS
The development of an extensively modified version of the environmental model GLEAMS to simulate fate and transport of mercury in small catchments is presented. Methods for parameter estimation are proposed and in some cases simple relationships for mercury processes are derived....
Sequential Computerized Mastery Tests--Three Simulation Studies
ERIC Educational Resources Information Center
Wiberg, Marie
2006-01-01
A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
NASA Technical Reports Server (NTRS)
Philip, Sajeev; Johnson, Matthew S.
2018-01-01
Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric fluxes. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, resulting in terrestrial biospheric models having significant differences in the quantification of biospheric CO2 fluxes. Atmospheric transport models assimilating measured (in situ or space-borne) CO2 concentrations to estimate "top-down" fluxes, generally use these biospheric CO2 fluxes as a priori information. Most of the flux inversion estimates result in substantially different spatio-temporal posteriori estimates of regional and global biospheric CO2 fluxes. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission dedicated to accurately measure column CO2 (XCO2) allows for an improved understanding of global biospheric CO2 fluxes. OCO-2 provides much-needed CO2 observations in data-limited regions facilitating better global and regional estimates of "top-down" CO2 fluxes through inversion model simulations. The specific objectives of our research are to: 1) conduct GEOS-Chem 4D-Var assimilation of OCO-2 observations, using several state-of-the-science biospheric CO2 flux models as a priori information, to better constrain terrestrial CO2 fluxes, and 2) quantify the impact of different biospheric model prior fluxes on OCO-2-assimilated a posteriori CO2 flux estimates. Here we present our assessment of the importance of these a priori fluxes by conducting Observing System Simulation Experiments (OSSE) using simulated OCO-2 observations with known "true" fluxes.
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as 5 W/sq m and as high as 0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence on AIE uncertainty on uncertainty in parameter values.
Lee, Sanghun; Park, Sung Soo
2011-11-03
Dielectric constants of electrolytic organic solvents are calculated employing nonpolarizable Molecular Dynamics simulation with Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained by the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants reproduced from these procedures are evaluated to provide a reliable approach for estimating the experimental data. An additional feature, two representative solvents which have similar molecular weights but are different dielectric properties, i.e., ethyl methyl carbonate and propylene carbonate, are compared using MD simulations and the distinctly different dielectric behaviors are observed at short times as well as at long times.
On the importance of avoiding shortcuts in applying cognitive models to hierarchical data.
Boehm, Udo; Marsman, Maarten; Matzke, Dora; Wagenmakers, Eric-Jan
2018-06-12
Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods, and present a new one. This new method estimates individual global monopole velocities in a network, by means of detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scalingmore » solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally we use our large simulation volume to compare the results from the different estimator methods, as well as to asses the validity of the numerical approximations made.« less
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Influence of model grid size on the simulation of PM2.5 and the related excess mortality in Japan
NASA Astrophysics Data System (ADS)
Goto, D.; Ueda, K.; Ng, C. F.; Takami, A.; Ariga, T.; Matsuhashi, K.; Nakajima, T.
2016-12-01
Aerosols, especially PM2.5, can affect air pollution, climate change, and human health. The estimation of health impacts due to PM2.5 is often performed using global and regional aerosol transport models with various horizontal resolutions. To investigate the dependence of the simulated PM2.5 on model grid sizes, we executed two simulations using a high-resolution model ( 10km; HRM) and a low-resolution model ( 100km; LRM, which is a typical value for general circulation models). In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a stretched grid system in HRM and a uniform grid system in LRM for the present (the 2000) and the future (the 2030, as proposed by the Representative Concentrations Pathway 4.5, RCP4.5). These calculations were performed by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 for the elderly. Results showed the LRM underestimated by approximately 30 % (of PM2.5 concentrations in the 2000 and 2030), approximately 60 % (excess mortality in the 2000) and approximately 90 % (excess mortality in 2030) compared to the HRM results. The estimation of excess mortality therefore performed better with high-resolution grid sizes. In addition, we also found that our nesting method could be a useful tool to obtain better estimation results.
Simulating intrafraction prostate motion with a random walk model.
Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O
2017-01-01
Prostate motion during radiation therapy (ie, intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used for input in a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (ie, rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace and transient motion. This was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm 2 /min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
NASA Astrophysics Data System (ADS)
Lee, Jongyeol; Kim, Moonil; Lakyda, Ivan; Pietsch, Stephan; Shvidenko, Anatoly; Kraxner, Florian; Forsell, Nicklas; Son, Yowhan
2016-04-01
There have been demands on reporting national forest carbon (C) inventories to mitigate global climate change. Global forestry models estimate growth of stem volume and C at various spatial and temporal scales but they do not consider dead organic matter (DOM) C. In this study, we simulated national forest C dynamics in South Korea with a calibrated global forestry model (G4M model) and a module of DOM C dynamics in Korean forest C model (FBDC model). 3890 simulation units (1-16 km2) were established in entire South Korea. Growth functions of stem for major tree species (Pinus densiflora, P. rigida, Larix kaempferi, Quercus variabilis, Q. mongolica, and Q. acutissima) were estimated by internal mechanism of G4M model and Korean yield tables. C dynamics in DOMs were determined by balance between input and output (decomposition) of DOMs in the FBDC model. Annual input of DOM was estimated by multiplying C stock of biomass compartment with turnover rate. Decomposition of DOM was estimated by C stock of DOM, mean air temperature, and decay rate. C stock in each C pool was initialized by spin-up process with consideration of severe deforestation by Japanese exploitation and Korean War. No disturbance was included in the simulation process. Total forest C stock (Tg C) and mean C density (Mg C ha-1) decreased from 657.9 and 112.1 in 1954 to 607.2 and 103.4 in 1973. Especially, C stock in mineral soil decreased at a rate of 0.5 Mg C ha-1 yr-1 during the period due to suppression of regeneration. However, total forest C stock (Tg C) and mean C density (Mg C ha-1) gradually increased from 607.0 and 103.4 in 1974 to 1240.7 and 211.3 in 2015 due to the national reforestation program since 1973. After the reforestation program, Korean forests became C sinks. Model estimates were also verified by comparison of these estimates and national forest inventory data (2006-2010). High similarity between the model estimates and the inventory data showed a reliability of down-scaled global forestry model and integration of DOM C module. Finally, total C stock gradually increased to 1749.8 Tg C in 2050 at a rate of 2.5 Tg C yr-1 and it might be attributed to mature of forest. However, total forest C stock might be overestimated in the future due to the exclusion of disturbance in simulation. This study was supported by Korea Forest Service (S111315L100120) and Korean Ministry of Environment (2014001310008).
NASA Technical Reports Server (NTRS)
Kimball, John; Kang, Sinkyu
2003-01-01
The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wild fire) to uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower and field- based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric C02. These original objectives were subsequently distilled to fit within the constraints of a 1 -year study. This revised study involved a regional model intercomparison over the BOREAS study region involving Biome-BGC, and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities involved quantifying the sensitivity of regional model predictions associated with land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.
ERIC Educational Resources Information Center
Estabrook, Ryne; Neale, Michael
2013-01-01
Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertholon, François; Harant, Olivier; Bourlon, Bertrand
This article introduces a joined Bayesian estimation of gas samples issued from a gas chromatography column (GC) coupled with a NEMS sensor based on Giddings Eyring microscopic molecular stochastic model. The posterior distribution is sampled using a Monte Carlo Markov Chain and Gibbs sampling. Parameters are estimated using the posterior mean. This estimation scheme is finally applied on simulated and real datasets using this molecular stochastic forward model.
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations
NASA Astrophysics Data System (ADS)
Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod
2016-11-01
Currently, in order to get realistic atmospheric effects of turbulence, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation. The precursor simulations are important because they capture turbulence in the atmosphere and as stated above, turbulence impacts the power production estimation. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full scale wind turbine LES simulations, while maintaining a high level of turbulent information and being able to quickly apply the turbulent inflow to multi turbine wind farms. This will be done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mod inflow conditions. The study shows the feasibility of using lower dimension models as turbulent inflow of LES wind turbine simulations. Overall the power production estimation and velocity field of the wind turbine wake are well captured with small errors.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
NASA Astrophysics Data System (ADS)
Tsumune, D.; Tsubono, T.; Aoyama, M.; Misumi, K.; Tateda, Y.
2015-12-01
A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant (1F NPP) following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials to the ocean by two major pathways, direct release from the accident site and atmospheric deposition.We reconstructed spatiotemporal variability of 137Cs activity in the regional ocean for four years by numerical model, such as a regional scale and the North Pacific scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota and river runoff model. Direct release rate of 137Cs were estimated for four years after the accident by comparing simulated results and observed activities very close to the site. The estimated total amounts of directly release was 3.6±0.7 PBq. Directly release rate of 137Cs decreased exponentially with time by the end of December 2012 and then, was almost constant. Decrease rate were quite small after 2013. The daily release rate of 137Cs was estimated to be the order of magnitude of 1010 Bq/day by the end of March 2015. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable. There is no observed data of 137Cs activity in the ocean from 11 to 21 March 2011. Observed data of marine biota should reflect the history of 137Cs activity in this early period. We reconstructed the history of 137Cs activity in this early period by considering atmospheric deposition, river input, rain water runoff from the 1F NPP site. The comparisons between simulated 137Cs activity of marine biota by a dynamic biological compartment and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. The simulated river flux of 137Cs to the ocean did not effect on 137Cs activity in the ocean even if the parameters in this simulation have uncertainties because of the lack of observed data in rivers in the earlier period.
NASA Astrophysics Data System (ADS)
Torn, M. S.; Koven, C. D.; Riley, W. J.; Zhu, B.; Hicks Pries, C.; Phillips, C. L.
2014-12-01
A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant (1F NPP) following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials to the ocean by two major pathways, direct release from the accident site and atmospheric deposition.We reconstructed spatiotemporal variability of 137Cs activity in the regional ocean for four years by numerical model, such as a regional scale and the North Pacific scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota and river runoff model. Direct release rate of 137Cs were estimated for four years after the accident by comparing simulated results and observed activities very close to the site. The estimated total amounts of directly release was 3.6±0.7 PBq. Directly release rate of 137Cs decreased exponentially with time by the end of December 2012 and then, was almost constant. Decrease rate were quite small after 2013. The daily release rate of 137Cs was estimated to be the order of magnitude of 1010 Bq/day by the end of March 2015. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable. There is no observed data of 137Cs activity in the ocean from 11 to 21 March 2011. Observed data of marine biota should reflect the history of 137Cs activity in this early period. We reconstructed the history of 137Cs activity in this early period by considering atmospheric deposition, river input, rain water runoff from the 1F NPP site. The comparisons between simulated 137Cs activity of marine biota by a dynamic biological compartment and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. The simulated river flux of 137Cs to the ocean did not effect on 137Cs activity in the ocean even if the parameters in this simulation have uncertainties because of the lack of observed data in rivers in the earlier period.
NASA Astrophysics Data System (ADS)
Yano, S.; Kondo, H.; Tawara, Y.; Yamada, T.; Mori, K.; Yoshida, A.; Tada, K.; Tsujimura, M.; Tokunaga, T.
2017-12-01
It is important to understand groundwater systems, including their recharge, flow, storage, discharge, and withdrawal, so that we can use groundwater resources efficiently and sustainably. To examine groundwater recharge, several methods have been discussed based on water balance estimation, in situ experiments, and hydrological tracers. However, few studies have developed a concrete framework for quantifying groundwater recharge rates in an undefined area. In this study, we established a robust method to quantitatively determine water cycles and estimate the groundwater recharge rate by combining the advantages of field surveys and model simulations. We replicated in situ hydrogeological observations and three-dimensional modeling in a mountainous basin area in Japan. We adopted a general-purpose terrestrial fluid-flow simulator (GETFLOWS) to develop a geological model and simulate the local water cycle. Local data relating to topology, geology, vegetation, land use, climate, and water use were collected from the existing literature and observations to assess the spatiotemporal variations of the water balance from 2011 to 2013. The characteristic structures of geology and soils, as found through field surveys, were parameterized for incorporation into the model. The simulated results were validated using observed groundwater levels and resulted in a Nash-Sutcliffe Model Efficiency Coefficient of 0.92. The results suggested that local groundwater flows across the watershed boundary and that the groundwater recharge rate, defined as the flux of water reaching the local unconfined groundwater table, has values similar to the level estimated in the `the lower soil layers on a long-term basis. This innovative method enables us to quantify the groundwater recharge rate and its spatiotemporal variability with high accuracy, which contributes to establishing a foundation for sustainable groundwater management.
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
NASA Astrophysics Data System (ADS)
Kao, S. C.; Shi, X.; Kumar, J.; Ricciuto, D. M.; Mao, J.; Thornton, P. E.
2017-12-01
With the concern of changing hydrologic regime, there is a crucial need to better understand how water availability may change and influence water management decisions in the projected future climate conditions. Despite that surface hydrology has long been simulated by land model within the Earth System modeling (ESM) framework, given the coarser horizontal resolution and lack of engineering-level calibration, raw runoff from ESM is generally discarded by water resource managers when conducting hydro-climate impact assessments. To identify a likely path to improve the credibility of ESM-simulated natural runoff, we conducted regional model simulation using the land component (ALM) of the Accelerated Climate Modeling for Energy (ACME) version 1 focusing on the conterminous United States (CONUS). Two very different forcing data sets, including (1) the conventional 0.5° CRUNCEP (v5, 1901-2013) and (2) the 1-km Daymet (v3, 1980-2013) aggregated to 0.5°, were used to conduct 20th century transient simulation with satellite phenology. Additional meteorologic and hydrologic observations, including PRISM precipitation and U.S. Geological Survey WaterWatch runoff, were used for model evaluation. For various CONUS hydrologic regions (such as Pacific Northwest), we found that Daymet can significantly improve the reasonableness of simulated ALM runoff even without intensive calibration. The large dry bias of CRUNCEP precipitation (evaluated by PRISM) in multiple CONUS hydrologic regions is believed to be the main reason causing runoff underestimation. The results suggest that when driving with skillful precipitation estimates, ESM has the ability to produce reasonable natural runoff estimates to support further water management studies. Nevertheless, model calibration will be required for regions (such as Upper Colorado) where ill performance is showed for multiple different forcings.
Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007
2014-01-15
Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modifiedmore » to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data the combined dataset required to apply a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra in microfocus x-ray sources has been developed and validated experimentally. Similarly to other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.« less
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-01-01
This paper describes a dataset of 6284 land transactions prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade-off accessibility and local green space amenities. PMID:26958606
Robel, G.L.; Fisher, W.L.
1999-01-01
Production of and consumption by hatchery-reared tingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking density rates. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.
Improved Parameter-Estimation With MRI-Constrained PET Kinetic Modeling: A Simulation Study
NASA Astrophysics Data System (ADS)
Erlandsson, Kjell; Liljeroth, Maria; Atkinson, David; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F.
2016-10-01
Kinetic analysis can be applied both to dynamic PET and dynamic contrast enhanced (DCE) MRI data. We have investigated the potential of MRI-constrained PET kinetic modeling using simulated [ 18F]2-FDG data for skeletal muscle. The volume of distribution, Ve, for the extra-vascular extra-cellular space (EES) is the link between the two models: It can be estimated by DCE-MRI, and then used to reduce the number of parameters to estimate in the PET model. We used a 3 tissue-compartment model with 5 rate constants (3TC5k), in order to distinguish between EES and the intra-cellular space (ICS). Time-activity curves were generated by simulation using the 3TC5k model for 3 different Ve values under basal and insulin stimulated conditions. Noise was added and the data were fitted with the 2TC3k model and with the 3TC5k model with and without Ve constraint. One hundred noise-realisations were generated at 4 different noise-levels. The results showed reductions in bias and variance with Ve constraint in the 3TC5k model. We calculated the parameter k3", representing the combined effect of glucose transport across the cellular membrane and phosphorylation, as an extra outcome measure. For k3", the average coefficient of variation was reduced from 52% to 9.7%, while for k3 in the standard 2TC3k model it was 3.4%. The accuracy of the parameters estimated with our new modeling approach depends on the accuracy of the assumed Ve value. In conclusion, we have shown that, by utilising information that could be obtained from DCE-MRI in the kinetic analysis of [ 18F]2-FDG-PET data, it is in principle possible to obtain better parameter estimates with a more complex model, which may provide additional information as compared to the standard model.
Model-based estimation for dynamic cardiac studies using ECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.
1994-06-01
In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performancemore » to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.« less
USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Augenbroe , Godfried
2015-12-09
Buildings are the main consumers of electricity across the world. However, in the research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings whereas the instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we never developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging technologies of the smart grid on buildings energy and power performance and vice versa. This paper argues that the powermore » performance of buildings as a function of electrical parameters should be evaluated in addition to systems’ mechanical and thermal behavior. The main advantage of capturing electrical behavior of building load is to better understand instantaneous power consumption and more importantly to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage dependent power models are constructed in this work and they are coupled with existing thermal energy models. Lack of models that describe electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of systems used in this study were simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values were calculated. Results indicate that estimated power has lower MSE when compared with measured data than simulated results. Results discussed in this paper will illustrate the significance of enhancing building energy models with electrical characteristics. This would support different studies such as those related to modernization of the power system that require micro scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and also design and control decisions that rely on accuracy of building energy simulation results.« less
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face the major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority in power generation industries utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which the serial bias compensation or robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypotheses testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arisen from uncertainties in performance simulation.
Logue, Jennifer M; Klepeis, Neil E; Lobscheid, Agnes B; Singer, Brett C
2014-01-01
Residential natural gas cooking burners (NGCBs) can emit substantial quantities of pollutants, and they are typically used without venting range hoods. We quantified pollutant concentrations and occupant exposures resulting from NGCB use in California homes. A mass-balance model was applied to estimate time-dependent pollutant concentrations throughout homes in Southern California and the exposure concentrations experienced by individual occupants. We estimated nitrogen dioxide (NO2), carbon monoxide (CO), and formaldehyde (HCHO) concentrations for 1 week each in summer and winter for a representative sample of Southern California homes. The model simulated pollutant emissions from NGCBs as well as NO2 and CO entry from outdoors, dilution throughout the home, and removal by ventilation and deposition. Residence characteristics and outdoor concentrations of NO2 and CO were obtained from available databases. We inferred ventilation rates, occupancy patterns, and burner use from household characteristics. We also explored proximity to the burner(s) and the benefits of using venting range hoods. Replicate model executions using independently generated sets of stochastic variable values yielded estimated pollutant concentration distributions with geometric means varying by <10%. The simulation model estimated that-in homes using NGCBs without coincident use of venting range hoods-62%, 9%, and 53% of occupants are routinely exposed to NO2, CO, and HCHO levels that exceed acute health-based standards and guidelines. NGCB use increased the sample median of the highest simulated 1-hr indoor concentrations by 100, 3,000, and 20 ppb for NO2, CO, and HCHO, respectively. Reducing pollutant exposures from NGCBs should be a public health priority. Simulation results suggest that regular use of even moderately effective venting range hoods would dramatically reduce the percentage of homes in which concentrations exceed health-based standards.
PLYMAP : a computer simulation model of the rotary peeled softwood plywood manufacturing process
Henry Spelter
1990-01-01
This report documents a simulation model of the plywood manufacturing process. Its purpose is to enable a user to make quick estimates of the economic impact of a particular process change within a mill. The program was designed to simulate the processing of plywood within a relatively simplified mill design. Within that limitation, however, it allows a wide range of...
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
Bender, David A.; Asher, William E.; Zogorski, John S.
2003-01-01
This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations were conducted to verify LakeVOC representation of mixing, dilution, and gas exchange characteristics in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were done in an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion in an actual reservoir with daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured-averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion. This may be because as the time scale is increased from daily to weekly to monthly, the averaging of model inputs may cause a loss of detail in the model estimates.
International Meeting on Simulation in Healthcare
2010-02-01
wounds, burns, and injury . Participants will create reusable moulage items using realistic gel effects materials—designed to work seamlessly with...simulations of injuries and clinical encounters. Such technology provides extremely high levels of perceived realism and encourages suspension of disbelief...trace. The model gives an estimate of the cerebral flow reduction that occurs during early decelerations, including an estimate for vessel diameter
Impact of geoengineered aerosols on the troposphere and stratosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tilmes, S.; Garcia, Rolando R.; Kinnison, Douglas E.
2009-06-27
A coupled chemistry climate model, the Whole Atmosphere Community Climate Model was used to perform a transient climate simulation to quantify the impact of geoengineered aerosols on atmospheric processes. In contrast to previous model studies, the impact on stratospheric chemistry, including heterogeneous chemistry in the polar regions, is considered in this simulation. In the geoengineering simulation, a constant stratospheric distribution of volcanic-sized, liquid sulfate aerosols is imposed in the period 2020–2050, corresponding to an injection of 2 Tg S/a. The aerosol cools the troposphere compared to a baseline simulation. Assuming an Intergovernmental Panel on Climate Change A1B emission scenario, globalmore » warming is delayed by about 40 years in the troposphere with respect to the baseline scenario. Large local changes of precipitation and temperatures may occur as a result of geoengineering. Comparison with simulations carried out with the Community Atmosphere Model indicates the importance of stratospheric processes for estimating the impact of stratospheric aerosols on the Earth’s climate. Changes in stratospheric dynamics and chemistry, especially faster heterogeneous reactions, reduce the recovery of the ozone layer in middle and high latitudes for the Southern Hemisphere. In the geoengineering case, the recovery of the Antarctic ozone hole is delayed by about 30 years on the basis of this model simulation. For the Northern Hemisphere, a onefold to twofold increase of the chemical ozone depletion occurs owing to a simulated stronger polar vortex and colder temperatures compared to the baseline simulation, in agreement with observational estimates.« less
Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem
NASA Astrophysics Data System (ADS)
Galić, Mario; Kraus, Ivan
2016-12-01
This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. Before the description of the model a detailed literature review is provided. The model is verified using a case study of solving the ready-mix concrete network flow and routing problem in metropolitan area in Croatia. Within this study real-time input parameters were taken into account. Simulation model is structured in Enterprise Dynamics simulation software and Microsoft Excel linked with Google Maps. The model is dynamic, easily managed and adjustable, but also provides good estimation for minimization of costs and realization time in solving discrete routing and material network flow problems.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He
2016-11-20
Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10-4 rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Ourmore » uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.« less
Compensating for estimation smoothing in kriging
Olea, R.A.; Pawlowsky, Vera
1996-01-01
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator-compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-fr2 slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well - better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. ?? 1996 International Association for Mathematical Geology.
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in afterpulsing model and a simple analytical expression is derived to estimate after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation and this behavioral simulation model doesn't include any empirical parameters. The developed SPAD model is implemented in Verilog-A behavioral hardware description language and successfully operated on commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in a good accordance with the test data, validating high simulation accuracy.
NASA Astrophysics Data System (ADS)
Swenson, S. C.; Lawrence, D. M.
2017-12-01
Partitioning the vertically integrated water storage variations estimated from GRACE satellite data into the components of which it is comprised requires independent information. Land surface models, which simulate the transfer and storage of moisture and energy at the land surface, are often used to estimate water storage variability of snow, surface water, and soil moisture. To obtain an estimate of changes in groundwater, the estimates of these storage components are removed from GRACE data. Biases in the modeled water storage components are therefore present in the residual groundwater estimate. In this study, we examine how soil moisture variability, estimated using the Community Land Model (CLM), depends on the vertical structure of the model. We then explore the implications of this uncertainty in the context of estimating groundwater variations using GRACE data.
NASA Astrophysics Data System (ADS)
Badawy, Bakr; Polavarapu, Saroja; Jones, Dylan B. A.; Deng, Feng; Neish, Michael; Melton, Joe R.; Nassar, Ray; Arora, Vivek K.
2018-02-01
The Canadian Land Surface Scheme and the Canadian Terrestrial Ecosystem Model (CLASS-CTEM) together form the land surface component in the family of Canadian Earth system models (CanESMs). Here, CLASS-CTEM is coupled to Environment and Climate Change Canada (ECCC)'s weather and greenhouse gas forecast model (GEM-MACH-GHG) to consistently model atmosphere-land exchange of CO2. The coupling between the land and the atmospheric transport model ensures consistency between meteorological forcing of CO2 fluxes and CO2 transport. The procedure used to spin up carbon pools for CLASS-CTEM for multi-decadal simulations needed to be significantly altered to deal with the limited availability of consistent meteorological information from a constantly changing operational environment in the GEM-MACH-GHG model. Despite the limitations in the spin-up procedure, the simulated fluxes obtained by driving the CLASS-CTEM model with meteorological forcing from GEM-MACH-GHG were comparable to those obtained from CLASS-CTEM when it is driven with standard meteorological forcing from the Climate Research Unit (CRU) combined with reanalysis fields from the National Centers for Environmental Prediction (NCEP) to form CRU-NCEP dataset. This is due to the similarity of the two meteorological datasets in terms of temperature and radiation. However, notable discrepancies in the seasonal variation and spatial patterns of precipitation estimates, especially in the tropics, were reflected in the estimated carbon fluxes, as they significantly affected the magnitude of the vegetation productivity and, to a lesser extent, the seasonal variations in carbon fluxes. Nevertheless, the simulated fluxes based on the meteorological forcing from the GEM-MACH-GHG model are consistent to some extent with other estimates from bottom-up or top-down approaches. Indeed, when simulated fluxes obtained by driving the CLASS-CTEM model with meteorological data from the GEM-MACH-GHG model are used as prior estimates for an atmospheric CO2 inversion analysis using the adjoint of the GEOS-Chem model, the retrieved CO2 flux estimates are comparable to those obtained from other systems in terms of the global budget and the total flux estimates for the northern extratropical regions, which have good observational coverage. In data-poor regions, as expected, differences in the retrieved fluxes due to the prior fluxes become apparent. Coupling CLASS-CTEM into the Environment Canada Carbon Assimilation System (EC-CAS) is considered an important step toward understanding how meteorological uncertainties affect both CO2 flux estimates and modeled atmospheric transport. Ultimately, such an approach will provide more direct feedback to the CLASS-CTEM developers and thus help to improve the performance of CLASS-CTEM by identifying the model limitations based on atmospheric constraints.
2011-01-01
Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
USDA-ARS?s Scientific Manuscript database
Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...
USDA-ARS?s Scientific Manuscript database
Cover crops influence soil nitrogen (N) mineralization-immobilization-turnover cycles (MIT), thus influencing N availability to a subsequent crop. Dynamic simulation models of the soil/crop system, if properly calibrated and tested, can simulate carbon (C) and N dynamics of a terminated cover crop a...
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small (<2.2 km), medium-sized, and giant tabular icebergs with lengths of more than 10km. The model is initialized with a realistic size distribution obtained from satellite observations. Our study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. Taking iceberg modeling a step further, we simulate drift and melting using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's keel. The climatology estimates of Antarctic iceberg melting based on simulations of small, 'small-to-medium'-sized, and small-to-giant icebergs (including icebergs > 10km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
NASA Astrophysics Data System (ADS)
Tian, Siyuan; Tregoning, Paul; Renzullo, Luigi J.; van Dijk, Albert I. J. M.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.; Allgeyer, Sébastien
2017-03-01
The accuracy of global water balance estimates is limited by the lack of observations at large scale and the uncertainties of model simulations. Global retrievals of terrestrial water storage (TWS) change and soil moisture (SM) from satellites provide an opportunity to improve model estimates through data assimilation. However, combining these two data sets is challenging due to the disparity in temporal and spatial resolution at both vertical and horizontal scale. For the first time, TWS observations from the Gravity Recovery and Climate Experiment (GRACE) and near-surface SM observations from the Soil Moisture and Ocean Salinity (SMOS) were jointly assimilated into a water balance model using the Ensemble Kalman Smoother from January 2010 to December 2013 for the Australian continent. The performance of joint assimilation was assessed against open-loop model simulations and the assimilation of either GRACE TWS anomalies or SMOS SM alone. The SMOS-only assimilation improved SM estimates but reduced the accuracy of groundwater and TWS estimates. The GRACE-only assimilation improved groundwater estimates but did not always produce accurate estimates of SM. The joint assimilation typically led to more accurate water storage profile estimates with improved surface SM, root-zone SM, and groundwater estimates against in situ observations. The assimilation successfully downscaled GRACE-derived integrated water storage horizontally and vertically into individual water stores at the same spatial scale as the model and SMOS, and partitioned monthly averaged TWS into daily estimates. These results demonstrate that satellite TWS and SM measurements can be jointly assimilated to produce improved water balance component estimates.
Investigation of flow and transport processes at the MADE site using ensemble Kalman filter
Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao
2008-01-01
In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. ?? 2008 Elsevier Ltd. All rights reserved.
MARS Science Laboratory Post-Landing Location Estimation Using Post2 Trajectory Simulation
NASA Technical Reports Server (NTRS)
Davis, J. L.; Shidner, Jeremy D.; Way, David W.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity rover landed safely on Mars August 5th, 2012 at 10:32 PDT, Earth Received Time. Immediately following touchdown confirmation, best estimates of position were calculated to assist in determining official MSL locations during entry, descent and landing (EDL). Additionally, estimated balance mass impact locations were provided and used to assess how predicted locations compared to actual locations. For MSL, the Program to Optimize Simulated Trajectories II (POST2) was the primary trajectory simulation tool used to predict and assess EDL performance from cruise stage separation through rover touchdown and descent stage impact. This POST2 simulation was used during MSL operations for EDL trajectory analyses in support of maneuver decisions and imaging MSL during EDL. This paper presents the simulation methodology used and results of pre/post-landing MSL location estimates and associated imagery from Mars Reconnaissance Orbiter s (MRO) High Resolution Imaging Science Experiment (HiRISE) camera. To generate these estimates, the MSL POST2 simulation nominal and Monte Carlo data, flight telemetry from onboard navigation, relay orbiter positions from MRO and Mars Odyssey and HiRISE generated digital elevation models (DEM) were utilized. A comparison of predicted rover and balance mass location estimations against actual locations are also presented.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-11-21
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu
2015-01-01
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the Neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324
Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji
2012-01-01
Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfallârunoff processes. Traditionally, depression storage is determined through model calibration or lumped with soil storage components or on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model
ERIC Educational Resources Information Center
Kim, Seonghoon
2010-01-01
The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…
NASA Technical Reports Server (NTRS)
Blankenship, Clay B.; Crosson, William L.; Case, Jonathan L.; Hale, Robert
2010-01-01
Improve simulations of soil moisture/temperature, and consequently boundary layer states and processes, by assimilating AMSR-E soil moisture estimates into a coupled land surface-mesoscale model Provide a new land surface model as an option in the Land Information System (LIS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, A; Bostani, M; McMillan, K
Purpose: The purpose of this work is to estimate effective and lung doses from a low-dose lung cancer screening CT protocol using Tube Current Modulation (TCM) across patient models of different sizes. Methods: Monte Carlo simulation methods were used to estimate effective and lung doses from a low-dose lung cancer screening protocol for a 64-slice CT (Sensation 64, Siemens Healthcare) that used TCM. Scanning parameters were from the AAPM protocols. Ten GSF voxelized patient models were used and had all radiosensitive organs identified to facilitate estimating both organ and effective doses. Predicted TCM schemes for each patient model were generatedmore » using a validated method wherein tissue attenuation characteristics and scanner limitations were used to determine the TCM output as a function of table position and source angle. The water equivalent diameter (WED) was determined by estimating the attenuation at the center of the scan volume for each patient model. Monte Carlo simulations were performed using the unique TCM scheme for each patient model. Lung doses were tallied and effective doses were estimated using ICRP 103 tissue weighting factors. Effective and lung dose values were normalized by scanspecific 32 cm CTDIvol values based upon the average tube current across the entire simulated scan. Absolute and normalized doses were reported as a function of WED for each patient. Results: For all ten patients modeled, the effective dose using TCM protocols was below 1.5 mSv. Smaller sized patient models experienced lower absolute doses compared to larger sized patients. Normalized effective and lung doses showed some dependence on patient size (R2 = 0.77 and 0.78, respectively). Conclusion: Effective doses for a low-dose lung screening protocol using TCM were below 1.5 mSv for all patient models used in this study. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.« less
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model M h be used to estimate the number of female bears with cubs of the year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.