Sample records for average model parameter

  1. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
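
    A minimal sketch of the ASA selection-and-averaging step described above, assuming the spatially varying posterior parameter values and their ensemble spreads are held in NumPy arrays; the quantile-based spread threshold and all names are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def adaptive_spatial_average(posterior_param, ensemble_spread, quantile=0.25):
        """Collapse spatially varying posterior parameter values into one
        globally uniform value, keeping only 'good' grid cells whose
        ensemble spread is small (here: lowest quantile of the spread)."""
        threshold = np.quantile(ensemble_spread, quantile)
        good = ensemble_spread <= threshold   # small spread ~ well-constrained estimate
        return posterior_param[good].mean()   # final global uniform posterior parameter

    # one ASA update per assimilation cycle:
    # theta_global = adaptive_spatial_average(theta_posterior, spread)
    ```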

  2. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  3. Parameter regionalisation methods for a semi-distributed rainfall-runoff model: application to a Northern Apennine region

    NASA Astrophysics Data System (ADS)

    Neri, Mattia; Toth, Elena

    2017-04-01

    The study presents the implementation of different regionalisation approaches for the transfer of model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model for simulating daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donors and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into elevation zones that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena do play an important role in the study basins. Two methods, both widely applied in the recent literature, are used to regionalise the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters calibrated on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently of each other, whereas in the second the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also tests a modified version of the "output averaging" approach in which each zone is treated as an autonomous entity whose parameters are transposed to the corresponding elevation zone of the ungauged basin. The study also explores the choice of the weights used for averaging the parameters (in the "parameters averaging" approach) or the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance between the ungauged basin/zone and the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
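
    The two regionalisation schemes contrasted above can be summarised in a short sketch; `run_model`, the array layouts, and the similarity-based weights are hypothetical placeholders, not the paper's code:

    ```python
    import numpy as np

    def parameters_averaging(donor_params, weights):
        """Weighted mean of each calibrated parameter across donor basins;
        parameters are transferred independently, losing their correlation."""
        w = np.asarray(weights, dtype=float)
        return np.average(np.asarray(donor_params), axis=0, weights=w)

    def output_averaging(run_model, donor_params, weights, forcing):
        """Run the model on the ungauged basin once per donor parameter set,
        then average the simulated hydrographs; parameter correlation is kept."""
        w = np.asarray(weights, dtype=float)
        sims = np.array([run_model(p, forcing) for p in donor_params])
        return np.average(sims, axis=0, weights=w)
    ```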

  4. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
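
    For reference, the standard Bayesian model averaging identities behind this record, in common notation (D the data, M_i the alternative models):

    ```latex
    p(\theta \mid D) = \sum_i p(\theta \mid D, M_i)\, P(M_i \mid D),
    \qquad
    P(M_i \mid D) = \frac{p(D \mid M_i)\, P(M_i)}{\sum_j p(D \mid M_j)\, P(M_j)}
    ```

    where the evidences p(D | M_i) are the quantities computed here by CosmoNest and MultiNest.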

  5. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that the sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%), some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information indicating which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models.

  6. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC, which indicates that the model is made more accurate by decreasing the uncertainty of the input parameters rather than by adding multiple phases or input parameters.
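
    A hedged sketch of the translation exercise: a generic single-phase FOD model driven by tonnage-weighted average parameters. The exact FOD forms in LandGEM, the IPCC Waste Model, and Afvalzorg differ in detail; function names and array layouts here are assumptions:

    ```python
    import numpy as np

    def weighted_average_params(masses, L0s, ks):
        """Translate multiphase inputs (per-waste-stream L0, k) into single-phase
        weighted-average parameters, weighting by tonnage."""
        w = np.asarray(masses, dtype=float) / np.sum(masses)
        return float(np.dot(w, L0s)), float(np.dot(w, ks))

    def fod_methane(years, annual_tonnage, L0, k):
        """Generic single-phase first-order decay: methane generated in year t
        from all prior annual deposits, Q(t) = sum_j M_j * L0 * k * exp(-k (t - j))."""
        q = np.zeros(len(years), dtype=float)
        for j, m in enumerate(annual_tonnage):
            age = np.asarray(years, dtype=float) - j
            q += np.where(age > 0, m * L0 * k * np.exp(-k * age), 0.0)
        return q
    ```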

  7. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
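
    In the Bayesian formulation sketched above, the hyper-parameters (call them h) are estimated from their posterior, and the "free energy" referred to is the negative log of the marginal likelihood; in generic notation (x the clean image, y the observed data):

    ```latex
    p(y \mid h) = \sum_{x} p(y \mid x, h)\, p(x \mid h),
    \qquad
    F(h) = -\ln p(y \mid h)
    ```

    Averaging images reduces the number of y samples entering this marginal likelihood, which is the mechanism behind the reported loss of confidence in the hyper-parameter estimates.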

  8. Climate modeling for Yamal territory using supercomputer atmospheric circulation model ECHAM5-wiso

    NASA Astrophysics Data System (ADS)

    Denisova, N. Y.; Gribanov, K. G.; Werner, M.; Zakharov, V. I.

    2015-11-01

    The subject of this study is the dependence of monthly means of regionally averaged model atmospheric parameters on how far in the past the initial and boundary conditions are specified. We used the atmospheric general circulation model ECHAM5-wiso to simulate monthly means of regionally averaged climate parameters for the Yamal region with different pre-modeling periods. The time interval was varied from several months to 12 years. We present the dependence of modeled monthly means of regionally averaged surface temperature, 2 m air temperature, and humidity for December 2000 on the duration of pre-modeling. Comparison of these results with reanalysis data showed that the best agreement with the true parameters is reached when the duration of pre-modeling is approximately 10 years.

  9. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.

  10. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceeded to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
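
    The variance contributions quoted above (e.g., LEAFALL explaining ~25% of the variance in average NEE) correspond to first-order variance-based (Sobol-type) sensitivity indices:

    ```latex
    S_i = \frac{\operatorname{Var}_{\theta_i}\!\left( \mathbb{E}\left[ Y \mid \theta_i \right] \right)}{\operatorname{Var}(Y)}
    ```

    with Y the model output (here, average NEE) and θ_i the i-th parameter.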

  11. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  12. Estimating effective soil properties of heterogeneous areas for modeling infiltration and redistribution

    USDA-ARS?s Scientific Manuscript database

    Field scale water infiltration and soil-water and solute transport models require spatially-averaged “effective” soil hydraulic parameters to represent the average flux and storage. The values of these effective parameters vary for different conditions, processes, and component soils in a field. For...

  13. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
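
    For reference, a PARMA(p, q) process replaces the constant ARMA coefficients with periodic ones; in common notation, with period T (T = 12 for monthly streamflow):

    ```latex
    X_t = \sum_{i=1}^{p} \phi_i(t)\, X_{t-i} + \varepsilon_t - \sum_{j=1}^{q} \theta_j(t)\, \varepsilon_{t-j},
    \qquad
    \phi_i(t+T) = \phi_i(t), \quad \theta_j(t+T) = \theta_j(t)
    ```

    The Fourier expansion mentioned in the abstract writes each periodic coefficient as a truncated sum of sines and cosines in 2πht/T, with the selection criterion choosing how many harmonics h to retain.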

  14. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

  15. Nonlinear ARMA models for the Dst index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
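
    One way to see the filter-to-oscillator correspondence (an illustrative discretization, not necessarily the paper's derivation): a damped oscillator driven by the solar wind input I(t),

    ```latex
    \ddot{x} + 2\gamma \dot{x} + \omega_0^2\, x = C\, I(t)
    ```

    discretized with time step Δt gives an AR(2)-type filter x_t = a_1 x_{t-1} + a_2 x_{t-2} + C Δt² I_{t-1}, with a_1 ≈ 2 − 2γΔt − ω_0²Δt² and a_2 ≈ 2γΔt − 1, so the growth/decay rate γ, oscillation frequency ω_0, and coupling strength C can be read off the fitted filter coefficients.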

  16. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for it are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  17. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data, the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
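
    The model-averaging step described here follows the standard Bayesian model averaging identities; with Δ the predicted quantity (e.g., kriged log permeability), D the data, and M_k the retained variogram models:

    ```latex
    \mathbb{E}[\Delta \mid D] = \sum_k \mathbb{E}[\Delta \mid M_k, D]\; p(M_k \mid D),
    \qquad
    \operatorname{Var}[\Delta \mid D] = \sum_k \Big( \operatorname{Var}[\Delta \mid M_k, D]
    + \big( \mathbb{E}[\Delta \mid M_k, D] - \mathbb{E}[\Delta \mid D] \big)^2 \Big)\, p(M_k \mid D)
    ```

    The second term in the variance shows how between-model spread inflates the combined predictive uncertainty.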

  18. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is an increase in the variance of demand fluctuations from downstream to upstream in a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. There are several ways to simulate the bullwhip effect in previous studies, such as mathematical equation modelling, information control modelling, and computer programs. In this study, a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving-average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
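
    A self-contained sketch of the kind of experiment described, assuming a moving-average forecast feeding a standard order-up-to policy; the policy details, safety factor z, and demand process are assumptions of this sketch, not Bullwhip Explorer's internals:

    ```python
    import numpy as np

    def bullwhip_ratio(demand, ma_period=4, lead_time=2, z=1.0):
        """Var(orders)/Var(demand) for an order-up-to policy whose level is set
        from a moving-average forecast; a ratio > 1 means amplification."""
        orders, prev_target = [], None
        for t in range(ma_period, len(demand)):
            window = demand[t - ma_period:t]
            forecast, sigma = window.mean(), window.std(ddof=1)
            target = forecast * lead_time + z * sigma * np.sqrt(lead_time)  # order-up-to level
            order = demand[t] + (0.0 if prev_target is None else target - prev_target)
            orders.append(order)
            prev_target = target
        return np.var(orders) / np.var(demand)

    rng = np.random.default_rng(0)
    demand = 100 + 10 * rng.standard_normal(5000)
    print(bullwhip_ratio(demand, ma_period=4))    # shorter averaging window -> larger ratio
    print(bullwhip_ratio(demand, ma_period=12))   # longer window damps the bullwhip effect
    ```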

  19. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. For the logistic regression model, however, variance reduction is not obviously obtained, which can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  20. Using a GIS to link digital spatial data and the precipitation-runoff modeling system, Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Battaglin, William A.; Kuhn, Gerhard; Parker, Randolph S.

    1993-01-01

    The U.S. Geological Survey Precipitation-Runoff Modeling System, a modular, distributed-parameter, watershed-modeling system, is being applied to 20 smaller watersheds within the Gunnison River basin. The model is used to derive a daily water balance for subareas in a watershed, ultimately producing simulated streamflows that can be input into routing and accounting models used to assess downstream water availability under current conditions, and to assess the sensitivity of water resources in the basin to alterations in climate. A geographic information system (GIS) is used to automate a method for extracting physically based hydrologic response unit (HRU) distributed parameter values from digital data sources, and for the placement of those estimates into GIS spatial datalayers. The HRU parameters extracted are: area, mean elevation, average land-surface slope, predominant aspect, predominant land-cover type, predominant soil type, average total soil water-holding capacity, and average water-holding capacity of the root zone.

  1. Channel Characterization for Free-Space Optical Communications

    DTIC Science & Technology

    2012-07-01

    parameters. From the path-average parameters, a Cn^2 profile model, called the HAP model, was constructed so that the entire channel from air to ground...SR), both of which are required to estimate the Power in the Bucket (PIB) and Power in the Fiber (PIF) associated with the FOENEX data beam. UCF was...of the path-average values of Cn^2, the resulting HAP Cn^2 profile model led to values of ground-level Cn^2 that compared very well with actual

  2. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, although the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree higher than allowed by the acceptance criteria, in 1 of the 2 wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
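
    The three validation metrics named above are commonly defined as follows (C_o observed, C_p predicted, overbars denoting averages over sensor positions):

    ```latex
    \mathrm{FB} = \frac{\overline{C_o} - \overline{C_p}}{0.5\,\big(\overline{C_o} + \overline{C_p}\big)},
    \qquad
    \mathrm{NMSE} = \frac{\overline{(C_o - C_p)^2}}{\overline{C_o}\;\overline{C_p}},
    \qquad
    \mathrm{FAC2} = \text{fraction of pairs with } 0.5 \le C_p / C_o \le 2
    ```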

  3. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.

  4. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
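
    The central object is the time-averaged MSD promoted to a second-order tensor; in the usual single-trajectory notation, with lag time Δ and measurement time t:

    ```latex
    \overline{\delta^2_{\alpha\beta}}(\Delta; t)
    = \frac{1}{t-\Delta} \int_0^{t-\Delta}
    \big[ r_\alpha(t'+\Delta) - r_\alpha(t') \big]
    \big[ r_\beta(t'+\Delta) - r_\beta(t') \big] \, dt'
    ```

    Its trace recovers the ordinary time-averaged MSD, while its eigenvalues and eigenvectors separate magnitude and orientation fluctuations of the diffusivity.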

  5. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.

  6. The effect of various parameters of large-scale radio propagation models on improving the performance of mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers, one of which is based on received signal strength. In this paper we observed the influence of the parameters of large-scale radio propagation models on improving the performance of mobile communications. The observed parameters characterizing the performance of the specified mobile system are the Drop Call rate, the Radio Link Degradation Rate, and the Average Size of the Active Set (AS). The simulation results show that increasing the Base Station (BS) and Mobile Station (MS) antenna heights improves the received signal power level, which improves radio link quality, increases the average size of the Active Set, and reduces the average Drop Call rate. It was also found that Hata's propagation model contributed significantly more to improvements in the system performance parameters than Okumura's and Lee's propagation models.
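
    Hata's model, singled out above, is a closed-form empirical path-loss formula, so the antenna-height effect can be checked directly; a sketch of the standard urban Okumura-Hata form for a small/medium city:

    ```python
    import math

    def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
        """Okumura-Hata median path loss (dB), urban, small/medium city;
        valid roughly for 150-1500 MHz, h_base 30-200 m, h_mobile 1-10 m, d 1-20 km."""
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
                - (1.56 * math.log10(f_mhz) - 0.8))        # mobile antenna correction
        return (69.55 + 26.16 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

    # raising the base-station antenna from 30 m to 60 m lowers the predicted loss:
    print(hata_urban_path_loss(900, 30, 1.5, 5))
    print(hata_urban_path_loss(900, 60, 1.5, 5))
    ```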

  7. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  8. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
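
    A sketch of the partial-standard-deviation standardization advocated above, using the Bring (1994) form that this line of work builds on; the (n−1)/(n−p) degrees-of-freedom factor and the VIF-from-inverse-correlation shortcut are stated from memory and should be checked against the paper:

    ```python
    import numpy as np

    def partial_sd(X):
        """Partial standard deviations for the columns of a design matrix X:
        s_j * sqrt(1 / VIF_j) * sqrt((n - 1) / (n - p)),
        where VIF_j is the j-th diagonal of the inverse correlation matrix."""
        n, p = X.shape
        s = X.std(axis=0, ddof=1)
        vif = np.diag(np.linalg.inv(np.corrcoef(X, rowvar=False)))
        return s * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))

    # beta_j * partial_sd(X)[j] puts coefficient estimates on scales that are
    # commensurate across models with different predictor subsets.
    ```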

  9. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  10. Neuromusculoskeletal Model Calibration Significantly Affects Predicted Knee Contact Forces for Walking

    PubMed Central

    Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.

    2016-01-01

    Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105

  11. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies of spheroidal and toroidal modes for three parameters of average radial anisotropy, together with density and P- and S-wave velocities, in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on a negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. The other parameters (P- and S-wave velocities) are close to PREM. Our results require that the average mantle be about 150 K colder than commonly assumed adiabats and consist of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  12. Diffuse reflectance of TiO2 pigmented paints: Spectral dependence of the average pathlength parameter and the forward scattering ratio

    NASA Astrophysics Data System (ADS)

    Vargas, William E.; Amador, Alvaro; Niklasson, Gunnar A.

    2006-05-01

    Diffuse reflectance spectra of paint coatings with different pigment concentrations, normally illuminated with unpolarized radiation, have been measured. A four-flux radiative transfer approach is used to model the diffuse reflectance of TiO2 (rutile) pigmented coatings through the solar spectral range. The spectral dependence of the average pathlength parameter and of the forward scattering ratio for diffuse radiation are explicitly incorporated into this four-flux model through two novel approximations. The size distribution of the pigments has been taken into account to obtain the averages of the four-flux parameters: scattering and absorption cross sections, forward scattering ratios for collimated and isotropic diffuse radiation, and the coefficients involved in the expansion of the single-particle phase function in terms of Legendre polynomials.

  13. Bianchi Type-II String Cosmological Model with Magnetic Field in f ( R, T) Gravity

    NASA Astrophysics Data System (ADS)

    Sharma, N. K.; Singh, J. K.

    2014-09-01

    The spatially homogeneous and totally anisotropic Bianchi type-II cosmological solutions of massive strings have been investigated in the presence of a magnetic field in the framework of f(R, T) gravity proposed by Harko et al. (Phys Rev D 84:024020, 2011). With the help of the special law of variation for Hubble's parameter proposed by Berman (Nuovo Cimento B 74:182, 1983), the cosmological model is obtained in this theory. We consider the f(R, T) model and investigate the modification R + f(T) in Bianchi type-II cosmology with an appropriate choice of the function f(T) = μT. We use the power-law relation between the average Hubble parameter H and the average scale factor R to find the solution. The assumption of a constant deceleration parameter leads to two models of the universe, i.e., a power-law model and an exponential model. Some physical and kinematical properties of the model are also discussed.
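
    Berman's special law referred to above fixes the deceleration parameter q = −R R̈ / Ṙ² to a constant, which integrates directly to the two families of solutions mentioned:

    ```latex
    R(t) = \left( a t + b \right)^{\frac{1}{1+q}} \quad (q \neq -1,\ \text{power law}),
    \qquad
    R(t) = R_0\, e^{H t} \quad (q = -1,\ \text{exponential})
    ```

    with a, b integration constants and H the (constant) average Hubble parameter.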

  14. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    PubMed

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in x-direction and 19.3 m/s in y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (∆t = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may provide guidance on how to effectively reduce BDT and may be used to identify deteriorating machine performance.
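
    A minimal sketch of the BDT bookkeeping described, seeded with the experimentally determined averages quoted in the abstract; the spot-travel model (independent x/y magnets limited by the slower axis) and the omission of the per-spill charge limit are simplifying assumptions of this sketch:

    ```python
    def beam_delivery_time(layers, layer_switch=1.91, magnet_prep=1.93e-3,
                           v_x=5.9, v_y=19.3, spill_rate=8.7):
        """Sum layer-switch, spot-switch, and spot-delivery times for one field.

        layers: list of energy layers, each a list of (x_m, y_m, mu) spots.
        Defaults: layer switch 1.91 s, magnet preparation/verification 1.93 ms,
        scanning speeds 5.9 / 19.3 m/s, spill rate 8.7 MU/s (from the abstract).
        """
        t = layer_switch * (len(layers) - 1)
        for layer in layers:
            for i, (x, y, mu) in enumerate(layer):
                if i > 0:
                    x0, y0, _ = layer[i - 1]
                    t += max(abs(x - x0) / v_x, abs(y - y0) / v_y) + magnet_prep
                t += mu / spill_rate   # spot delivery at the measured spill rate
        return t
    ```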

  15. Atmospheric mold spore counts in relation to meteorological parameters

    NASA Astrophysics Data System (ADS)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied over 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and the significant weather variables; the models for Alternaria and Epicoccum incorporated only the annual seasonal cycle. Fungal spore counts can thus be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; such modeling can provide estimates of exposure to fungal aeroallergens.

  16. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  17. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) present an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable to estimate hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  18. Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasily

    2013-04-01

    Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at the evaluation of the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs from the IPCC AR-5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in the impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling and identified top-end GCMs that "better than average" reproduce modern regional changes of the selected meteorological parameters and climatic indexes. Selected top-end GCMs were used to compose several ensembles, each combining results from different numbers of models. Ensembles were ranked using the same algorithm and outliers were eliminated. We then used data from top-end ensembles for the 2000-2100 period to construct the climatic projections that are likely to be "better than average" in predicting climatic parameters that govern the state of the environment in Northern Eurasia. The ultimate conclusions of our study are the following. • High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best in replicating climatic characteristics that govern the state of the environment in Northern Eurasia, and independent model evaluation at the regional level is necessary to identify "better than average" GCMs. • Each of the ensembles combining results from several "better than average" models replicates the selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models an ensemble comprises. The best results are not necessarily those based on the ensemble comprising all "better than average" models. • Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.

  19. Height extrapolation of wind data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhail, A.S.

    1982-11-01

    Hourly average data for a period of 1 year from three tall meteorological towers - the Erie tower in Colorado, the Goodnoe Hills tower in Washington and the WKY-TV tower in Oklahoma - were used to analyze the variability of the wind shear exponent with parameters such as thermal stability, anemometer-level wind speed, projection height and surface roughness. Different proposed models for prediction of the height variability of short-term average wind speeds were discussed. Other models that predict the height dependence of Weibull distribution parameters were tested. The observed power law exponent for all three towers showed strong dependence on the anemometer-level wind speed and stability (nighttime and daytime). It also exhibited a high degree of dependence on extrapolation height with respect to anemometer height. These dependences became less severe as the anemometer-level wind speeds increased, due to the turbulent mixing of the atmospheric boundary layer. The three models used for Weibull distribution parameter extrapolation were the velocity-dependent power law model (Justus), the velocity, surface roughness, and height-dependent model (Mikhail) and the velocity and surface roughness-dependent model (NASA). The models projected the scale parameter C fairly accurately for the Goodnoe Hills and WKY-TV towers and were less accurate for the Erie tower. However, all models overestimated the C value. The maximum error for the Mikhail model was less than 2% for Goodnoe Hills, 6% for WKY-TV and 28% for Erie. The error associated with the prediction of the shape factor (K) was similar for the NASA, Mikhail and Justus models; it ranged from 20 to 25%. The effect of the misestimation of hub-height distribution parameters (C and K) on average power output is briefly discussed.
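
    A sketch of the two extrapolation steps discussed above: the power-law shear profile and a Justus-type height adjustment of the Weibull scale C and shape K. The Justus coefficients (0.37, 0.088) are the commonly cited forms; verify against the original references before relying on them, and all numeric inputs below are illustrative.

```python
# Power-law wind shear extrapolation and Justus-type Weibull parameter
# height adjustment (coefficients as commonly cited; check the sources).
import math

def power_law_speed(u_ref, z_ref, z, alpha):
    """Mean wind speed from the power law u(z) = u_ref * (z/z_ref)^alpha."""
    return u_ref * (z / z_ref) ** alpha

def shear_exponent(u1, z1, u2, z2):
    """Observed power-law exponent from two anemometer levels."""
    return math.log(u2 / u1) / math.log(z2 / z1)

def justus_weibull(c_ref, k_ref, z_ref, z):
    """Height-adjust Weibull scale C (m/s) and shape K (Justus-type model)."""
    n = (0.37 - 0.088 * math.log(c_ref)) / (1.0 - 0.088 * math.log(z_ref / 10.0))
    c = c_ref * (z / z_ref) ** n
    k = k_ref * (1.0 - 0.088 * math.log(z_ref / 10.0)) \
              / (1.0 - 0.088 * math.log(z / 10.0))
    return c, k

print(shear_exponent(5.0, 10.0, 6.2, 50.0))     # empirical alpha from two levels
print(power_law_speed(5.0, 10.0, 80.0, 0.14))   # hub-height mean speed
print(justus_weibull(6.0, 2.0, 10.0, 80.0))     # hub-height C, K
```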

  20. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As a part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale $L$ predicted by the model in the limit of small $L$. In particular, the variance of rain rate averaged over an $L \times L$ area exhibits a power law singularity as $L \rightarrow 0$. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
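
    A sketch of the small-$L$ power-law behaviour mentioned above, fitted with a two-parameter form var(L) = a·L^(-γ); the data points are synthetic stand-ins for the gridded radar statistics.

```python
# Fit the predicted small-L power law for the variance of area-averaged
# rain rate, var(L) ~ a * L^(-gamma) (illustrative synthetic data).
import numpy as np
from scipy.optimize import curve_fit

L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # averaging scale (km)
var_obs = 30.0 * L ** -0.4 \
        * (1 + np.random.default_rng(1).normal(0, 0.05, L.size))

def power_law(L, a, gamma):
    return a * L ** (-gamma)

(a, gamma), _ = curve_fit(power_law, L, var_obs, p0=[10.0, 0.5])
print(f"strength a = {a:.2f}, exponent gamma = {gamma:.3f}")
```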

  1. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on the drug stability by initial average rate experiment, and to obtained the kinetic parameters. The effect of concentration error, drug degradation extent, humidity and temperature numbers, humidity and temperature range, and average humidity and temperature on the accuracy and precision of kinetic parameters in the initial average rate experiment was explored. The stability of vitamin C, as a solid state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from this proposed method were comparable to those from classical isothermal experiment at constant humidity. The estimates were more accurate and precise by controlling the extent of drug degradation, changing humidity and temperature range, or by setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.

  2. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
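
    A sketch of the simple score-weighted averaging: each run's equivalent sea-level-rise curve is weighted by its aggregate misfit and a weighted envelope is extracted. The exponential weight convention and all data are assumptions for illustration; the paper does not specify this exact form.

```python
# Score-weighted ensemble mean and 5-95% envelope (toy data, assumed
# weight form: smaller aggregate misfit -> larger weight).
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_times = 625, 200
eslr = rng.normal(0, 1, (n_runs, n_times)).cumsum(axis=1) * 0.05  # toy ESL curves
misfit = rng.gamma(2.0, 1.0, n_runs)                              # aggregate misfit

weights = np.exp(-misfit)           # assumed mapping from score to weight
weights /= weights.sum()

mean_curve = weights @ eslr         # weighted ensemble mean

# Weighted 5-95% envelope at each time step
order = np.argsort(eslr, axis=0)
w_grid = np.broadcast_to(weights[:, None], eslr.shape)
cum_w = np.take_along_axis(w_grid, order, axis=0).cumsum(axis=0)
sorted_eslr = np.take_along_axis(eslr, order, axis=0)
cols = np.arange(n_times)
lo = sorted_eslr[np.argmax(cum_w >= 0.05, axis=0), cols]
hi = sorted_eslr[np.argmax(cum_w >= 0.95, axis=0), cols]
print(mean_curve[-1], lo[-1], hi[-1])
```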

  3. Solute redistribution in dendritic solidification with diffusion in the solid

    NASA Technical Reports Server (NTRS)

    Ganesan, S.; Poirier, D. R.

    1989-01-01

    An investigation of solute redistribution during dendritic solidification with diffusion in the solid has been performed using numerical techniques. The extent of diffusion is characterized by the instantaneous and average diffusion parameters. These parameters are functions of the diffusion Fourier number, the partition ratio and the fraction solid. Numerical results are presented as an approximate model, which is used to predict the average diffusion parameter and calculate the composition of the interdendritic liquid during solidification.

  4. Excitation of the Earth's Chandler wobble by a turbulent oceanic double-gyre

    NASA Astrophysics Data System (ADS)

    Naghibi, S. E.; Jalali, M. A.; Karabasov, S. A.; Alam, M.-R.

    2017-04-01

    We develop a layer-averaged, multiple-scale spectral ocean model and show how an oceanic double-gyre can communicate with the Earth's Chandler wobble. The overall transfers of energy and angular momentum from the double-gyre to the Chandler wobble are used to calibrate the turbulence parameters of the layer-averaged model. Our model is tested against a multilayer quasi-geostrophic ocean model in the turbulent regime, and the base states used in parameter identification are obtained from mesoscale eddy-resolving numerical simulations. The Chandler wobble excitation function obtained from the model predicts a small role for the North Atlantic Ocean region in the wobble dynamics as compared to all oceans, in agreement with existing observations.

  5. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  6. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (G_h). The GSRHS model uses the transmission function (T_f,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H_0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R²). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R²) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e. MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface for different locations, with the use of readily available data (i.e. latitude and longitude of the location) as inputs.

  7. Online quantitative analysis of multispectral images of human body tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.

    2013-08-01

    A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.

  8. Bayes factors and multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multimodel inference has two main themes: model selection, and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Noting the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
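
    Step (iv) above is an overdamped Langevin evolution of the order parameters. A generic Euler-Maruyama sketch of that step is given below; the force law, diffusivities, and all constants are placeholders, not the authors' code.

```python
# Generic overdamped Langevin (Euler-Maruyama) update of order parameters
# Phi using thermal-average forces F and diffusivities D from an ensemble.
import numpy as np

def langevin_step(phi, F, D, kT, dt, rng):
    """One step of dPhi = (D*F/kT) dt + sqrt(2 D dt) * xi."""
    drift = D * F / kT
    noise = np.sqrt(2.0 * D * dt) * rng.normal(size=phi.shape)
    return phi + drift * dt + noise

rng = np.random.default_rng(0)
phi = np.zeros(3)                  # order parameters
for _ in range(1000):
    F = -2.0 * phi                 # placeholder thermal-average force
    D = np.full(3, 0.1)            # placeholder diffusivities
    phi = langevin_step(phi, F, D, kT=1.0, dt=1e-3, rng=rng)
print(phi)
```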

  10. Determination of the turbulence integral model parameters for a case of a coolant angular flow in regular rod-bundle

    NASA Astrophysics Data System (ADS)

    Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph

    2017-11-01

    Research results on the dependence of the “k-ε” turbulence integral model (TIM) parameters on the angle of coolant flow in a regular smooth cylindrical rod-bundle are presented. The TIM is intended for the definition of effective impulse and heat transport coefficients in the averaged equations of heat and mass transfer in regular rod structures in an anisotropic porous media approximation. The TIM equations are obtained by volume-averaging the “k-ε” turbulence model equations over a periodic cell of the rod-bundle. The water flow across the rod-bundle at angles from 15 to 75 degrees was simulated by means of the ANSYS CFX code. As a result, the dependence of the TIM parameters on the flow angle was obtained.

  11. A flexible model of foraging by a honey bee colony: the effects of individual behaviour on foraging success.

    PubMed

    Cox, Melissa D; Myerscough, Mary R

    2003-07-21

    This paper develops and explores a model of foraging in honey bee colonies. The model may be applied to forage sources with various properties, and to colonies with different foraging-related parameters. In particular, we examine the effect of five foraging-related parameters on the foraging response and consequent nectar intake of a homogeneous colony. The parameters investigated affect different quantities critical to the foraging cycle: visit rate (affected by g), probability of dancing (mpd and bpd), duration of dancing (mcirc), or probability of abandonment (A). We show that one parameter, A, affects nectar intake in a nonlinear way. Further, we show that colonies with a midrange value of any foraging parameter perform better than the average of colonies with high- and low-range values, when profitable sources are available. Together these observations suggest that a heterogeneous colony, in which a range of parameter values is present, may perform better than a homogeneous colony. We modify the model to represent heterogeneous colonies and use it to show that the most important effect of heterogeneous foraging behaviour within the colony is to reduce the variance in the average quantity of nectar collected by heterogeneous colonies.

  12. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  13. Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.

    1996-12-31

    A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of the globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
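
    A sketch of the spring-mass droplet picture described above: mass is added at the melting rate, the drop stretches a damped spring, and detachment fires on a criterion. All constants, the single displacement criterion, and the detached-mass rule are illustrative assumptions, not the calibrated model.

```python
# Toy spring-mass droplet growth/detachment loop (illustrative constants).
import numpy as np

DT = 1e-5          # time step (s)
K_SPRING = 1.0     # spring constant (N/m), illustrative
DAMPING = 0.01     # damping coefficient (N*s/m)
MELT_RATE = 2e-4   # electrode melting rate (kg/s), illustrative
G = 9.81
X_DETACH = 2e-4    # displacement detachment criterion (m); the real model
                   # uses two criteria

m, x, v, t = 1e-6, 0.0, 0.0, 0.0
detachments = []
while t < 0.5:
    m += MELT_RATE * DT                          # melting adds mass to the drop
    a = G - (K_SPRING * x + DAMPING * v) / m     # gravity vs spring + damping
    v += a * DT
    x += v * DT
    if x > X_DETACH:                             # detachment criterion met
        detached = 0.8 * m if v > 0 else 0.5 * m # detached mass depends on velocity
        detachments.append((t, detached))
        m -= detached
        x, v = 0.0, 0.0
    t += DT

times = [td for td, _ in detachments]
print(f"{len(detachments)} droplets; mean period {np.diff(times).mean():.4f} s")
```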

  14. On the Nature of SEM Estimates of ARMA Parameters.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  15. Individual Differences in a Positional Learning Task across the Adult Lifespan

    ERIC Educational Resources Information Center

    Rast, Philippe; Zimprich, Daniel

    2010-01-01

    This study aimed at modeling individual and average non-linear trajectories of positional learning using a structured latent growth curve approach. The model is based on an exponential function which encompasses three parameters: Initial performance, learning rate, and asymptotic performance. These learning parameters were compared in a positional…
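
    One common parameterization of such a three-parameter exponential curve is the following (notation assumed here, not necessarily the authors'): initial performance θ₀, asymptotic performance θₐ, and learning rate λ.

```latex
% Exponential learning curve: performance at trial t rises from theta_0
% toward the asymptote theta_a at rate lambda.
y_{t} \;=\; \theta_{a} - \left(\theta_{a} - \theta_{0}\right) e^{-\lambda t}
```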

  16. Drag coefficients for modeling flow through emergent vegetation in the Florida Everglades

    USGS Publications Warehouse

    Lee, J.K.; Roig, L.C.; Jenter, H.L.; Visser, H.M.

    2004-01-01

    Hydraulic data collected in a flume fitted with pans of sawgrass were analyzed to determine the vertically averaged drag coefficient as a function of vegetation characteristics. The drag coefficient is required for modeling flow through emergent vegetation at low Reynolds numbers in the Florida Everglades. Parameters of the vegetation, such as the stem population per unit bed area and the average stem/leaf width, were measured for five fixed vegetation layers. The vertically averaged vegetation parameters for each experiment were then computed by weighted average over the submerged portion of the vegetation. Only laminar flow through emergent vegetation was considered, because this is the dominant flow regime of the inland Everglades. A functional form for the vegetation drag coefficient was determined by linear regression of the logarithmic transforms of measured resistance force and Reynolds number. The coefficients of the drag coefficient function were then determined for the Everglades, using extensive flow and vegetation measurements taken in the field. The Everglades data show that the stem spacing and the Reynolds number are important parameters for the determination of vegetation drag coefficient. © 2004 Elsevier B.V. All rights reserved.
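
    The regression step above amounts to fitting a power law C_d = a·Re^b on log-transformed data. A short sketch with synthetic values standing in for the flume measurements:

```python
# Drag-coefficient law by linear regression of log-transformed data.
import numpy as np

Re = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
Cd = 40.0 / Re * (1 + np.random.default_rng(2).normal(0, 0.03, Re.size))

b, log_a = np.polyfit(np.log(Re), np.log(Cd), 1)
print(f"C_d = {np.exp(log_a):.2f} * Re^{b:.2f}")  # expect b near -1 (laminar)
```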

  17. Application of Time-series Model to Predict Groundwater Quality Parameters for Agriculture: (Plain Mehran Case Study)

    NASA Astrophysics Data System (ADS)

    Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh

    2018-03-01

    Groundwater is an important water source, particularly in arid and semi-arid regions with deficient surface water. Forecasting of hydrological variables is a useful tool in water resources management, and time series methods are an efficient means of carrying out such forecasts. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) of 17 groundwater wells in the Mehran Plain were used to model the trend of parameter change over time. Using the selected model, the qualitative parameters of the groundwater were predicted for the next seven years. Data from 2003 to 2016 were collected and fitted by AR, MA, ARMA, ARIMA and SARIMA models. Afterward, the best model was determined using the Akaike information criterion (AIC) and the correlation coefficient. After modeling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the predicted average SAR (sodium adsorption ratio) in all wells will increase in 2023 compared with 2016. The average EC (electrical conductivity) will increase in the ninth and fifteenth wells and decrease in the other wells. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.

  18. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (autoregressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the Observer's Assessment of Alertness/Sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.

  19. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by the volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most of the methods used for synthetic seismogram calculation today rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For the FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme analytically obtains the integral volume averages through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
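
    The effective-parameter recipe itself is simple; a sketch for one grid cell straddling a discontinuity is given below (values are illustrative, and the paper's contribution is doing these averages analytically for the standard-model functional forms rather than numerically).

```python
# Effective grid parameters: harmonic volume average of an elastic modulus,
# arithmetic volume average of density, for a cell containing two materials.
import numpy as np

def effective_modulus(moduli, volume_fractions):
    """Harmonic (Reuss-type) volume average of elastic moduli in a cell."""
    f = np.asarray(volume_fractions, dtype=float)
    return 1.0 / np.sum(f / np.asarray(moduli, dtype=float))

def effective_density(densities, volume_fractions):
    """Arithmetic volume average of density in a cell."""
    return float(np.dot(volume_fractions, densities))

# Cell 30% above / 70% below a discontinuity (illustrative mantle-like values)
mu = effective_modulus([70e9, 120e9], [0.3, 0.7])    # Pa
rho = effective_density([3300.0, 3800.0], [0.3, 0.7])  # kg/m^3
print(mu, rho)
```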

  20. Modeling Patterns of Total Dissolved Solids Release from Central Appalachia, USA, Mine Spoils.

    PubMed

    Clark, Elyse V; Zipper, Carl E; Daniels, W Lee; Orndorff, Zenah W; Keefe, Matthew J

    2017-01-01

    Surface mining in the central Appalachian coalfields (USA) influences water quality because the interaction of infiltrated waters and O with freshly exposed mine spoils releases elevated levels of total dissolved solids (TDS) to streams. Modeling and predicting the short- and long-term TDS release potentials of mine spoils can aid in the management of current and future mining-influenced watersheds and landscapes. In this study, the specific conductance (SC, a proxy variable for TDS) patterns of 39 mine spoils during a sequence of 40 leaching events were modeled using a five-parameter nonlinear regression. Estimated parameter values were compared to six rapid spoil assessment techniques (RSATs) to assess predictive relationships between model parameters and RSATs. Spoil leachates reached maximum values, 1108 ± 161 μS cm⁻¹ on average, within the first three leaching events, then declined exponentially to a breakpoint at the 16th leaching event on average. After the breakpoint, SC release remained linear, with most spoil samples exhibiting declines in SC release with successive leaching events. The SC asymptote averaged 276 ± 25 μS cm⁻¹. Only three samples had SCs >500 μS cm⁻¹ at the end of the 40 leaching events. Model parameters varied with mine spoil rock and weathering type, and RSATs were predictive of four model parameters. Unweathered samples released higher SCs throughout the leaching period relative to weathered samples, and rock type influenced the rate of SC release. The RSATs for SC, total S, and neutralization potential may best predict certain phases of mine spoil TDS release. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  1. Dependence of the average spatial and energy characteristics of the hadron-lepton cascade on the strong interaction parameters at superhigh energies

    NASA Technical Reports Server (NTRS)

    Boyadjian, N. G.; Dallakyan, P. Y.; Garyaka, A. P.; Mamidjanian, E. A.

    1985-01-01

    A method for calculating the average spatial and energy characteristics of hadron-lepton cascades in the atmosphere is described. The results of calculations for various strong interaction models of primary protons and nuclei are presented. The sensitivity of the experimentally observed extensive air showers (EAS) characteristics to variations of the elementary act parameters is analyzed.

  2. Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response

    PubMed Central

    Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.

    2016-01-01

    Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420

  3. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale $L$ as $L \rightarrow 0$. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale $L$ is also explored.

  4. Apparent cosmic acceleration from Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.

    2017-11-01

    Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter $q_0 = -0.043^{+0.004}_{-0.000}$. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.

  5. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters $s_0$ and $w_0$ are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  6. [The analysis of climatic and biological parameters for the pest spread risk modelling of the wood nematode species Bursaphelenchus spp. and Devibursaphelenchus teratospicularis (Rhabditida: Aphelenchoidea)].

    PubMed

    Ryss, A Y; Mokrousov, M V

    2014-01-01

    Based on a survey of forest woody species wilt areas in the Nizhniy Novgorod region in August 2014, possible factors for pest spread risk modelling were analysed for six species of the genus Bursaphelenchus and Devibursaphelenchus teratospicularis using six parameters: plant host species, beetle vector species, average temperatures in July and January, and annual precipitation. It was concluded that these parameters in the evaluated wilt spots correspond to the climatic and biological data of already published woody plant wilt records in Europe and Asia caused by the same nematode pest species. It was speculated that an annual precipitation of 600 mm and an average July temperature of 25 degrees C or higher are the critical combination that may be used to develop predictive risk modelling for forest and park wilt monitoring.

  7. The pitch of short-duration fundamental frequency glissandos.

    PubMed

    d'Alessandro, C; Rosset, S; Rossi, J P

    1998-10-01

    Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.

  8. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools for estimating the risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, and they do not account for many driving factors that are in the scope of scenario-related analysis. In addition, the physically based models contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. In particular, soil infiltrability and erosion resistance (also called "critical shear stress" or "critical stream power") are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution); consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that can directly use stochastic variables and parameter distributions. There are only a few minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); at best they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate which deliver information on the spatial and temporal structure of soil and surface properties and processes.

  9. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

    A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCp for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for ¹tr and 2.1% for ²tr. Copyright © 2013 Elsevier B.V. All rights reserved.
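
    A sketch of the estimation idea: predict temperature-programmed retention by integrating the analyte's progress along the column and fit ΔH, ΔS, ΔCp by Nelder-Mead. The retention model here (constant hold-up time, a single linear ramp, phase ratio folded into ΔS) is a simplification of the published treatment, and all numbers are illustrative.

```python
# Nelder-Mead fit of three thermodynamic parameters to temperature-programmed
# retention times (simplified retention model; see lead-in for assumptions).
import numpy as np
from scipy.optimize import minimize

R, T0, TM = 8.314, 363.15, 60.0        # gas constant, reference T (K), hold-up (s)

def k_factor(T, dH, dS, dCp):
    """Retention factor from a three-parameter (Clarke-Glew-type) model."""
    dG = dH + dCp * (T - T0) - T * (dS + dCp * np.log(T / T0))
    return np.exp(-dG / (R * T))

def retention_time(params, T_start, ramp=10.0 / 60.0, dt=0.2):
    """Integrate dz/dt = 1 / (TM * (1 + k(T))) until the analyte exits (z = 1)."""
    dH, dS, dCp = params
    t, z, dz = 0.0, 0.0, 1e-12
    while z < 1.0 and t < 2e4:         # time cap guards against absurd trial params
        T = T_start + ramp * t
        dz = dt / (TM * (1.0 + k_factor(T, dH, dS, dCp)))
        z += dz
        t += dt
    return t - dt * (z - 1.0) / dz     # interpolate the column-exit crossing

starts = (313.15, 323.15, 333.15)
true = (-45e3, -95.0, -60.0)           # "true" parameters used to fake observations
obs = [retention_time(true, Ts) for Ts in starts]

def sse(p):
    return sum((retention_time(p, Ts) - o) ** 2 for Ts, o in zip(starts, obs))

fit = minimize(sse, x0=(-40e3, -90.0, -50.0), method="Nelder-Mead",
               options={"maxiter": 300})
print(fit.x)
```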

  10. Icing Analysis of a Swept NACA 0012 Wing Using LEWICE3D Version 3.48

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.

    2014-01-01

    Icing calculations were performed for a NACA 0012 swept wing tip using LEWICE3D Version 3.48 coupled with the ANSYS CFX flow solver. The calculated ice shapes were compared to experimental data generated in the NASA Glenn Icing Research Tunnel (IRT). The IRT tests were designed to test the performance of the LEWICE3D ice void density model which was developed to improve the prediction of swept wing ice shapes. Icing tests were performed for a range of temperatures at two different droplet inertia parameters and two different sweep angles. The predicted mass agreed well with the experiment with an average difference of 12%. The LEWICE3D ice void density model under-predicted void density by an average of 30% for the large inertia parameter cases and by 63% for the small inertia parameter cases. This under-prediction in void density resulted in an over-prediction of ice area by an average of 115%. The LEWICE3D ice void density model produced a larger average area difference with experiment than the standard LEWICE density model, which doesn't account for the voids in the swept wing ice shape (115% and 75%, respectively), but it produced ice shapes which were deemed more appropriate because they were conservative (larger than experiment). Major contributors to the overly conservative ice shape predictions were deficiencies in the leading edge heat transfer and the sensitivity of the void ice density model to the particle inertia parameter. The scallop features present on the ice shapes were thought to generate interstitial flow and horseshoe vortices which enhance the leading edge heat transfer. A set of changes to improve the leading edge heat transfer and the void density model were tested. The changes improved the ice shape predictions considerably. More work needs to be done to evaluate the performance of these modifications for a wider range of geometries and icing conditions.

  12. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    NASA Astrophysics Data System (ADS)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during the wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the ranges of and relationships among the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions is presented comprehensively, and the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models. The parameter values were found to decrease with increased stiffness of the elastic support. A damage criterion for the structural model under wave impact is discussed preliminarily: the structural model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact.
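
    A sketch of the three-parameter Weibull exceedance probability, P(X > x) = exp(-((x - loc)/scale)^shape). Note that the exceedance at the distribution mean equals exp(-Γ(1 + 1/k)^k) regardless of the location parameter, and exceeds 1/e ≈ 36.79% when the shape k is above 1, consistent with the figure quoted above. The parameter values below are illustrative; loc + scale = 1.0 mirrors the reported relationship.

```python
# Exceedance probability of a three-parameter Weibull peak distribution.
from scipy.stats import weibull_min

shape, loc, scale = 1.2, 0.4, 0.6       # illustrative; note loc + scale = 1.0
dist = weibull_min(shape, loc=loc, scale=scale)

x_mean = dist.mean()
print(f"P(X > mean) = {dist.sf(x_mean):.4f}")   # > 0.3679 since shape > 1
```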

  13. Model selection and model averaging in phylogenetics: advantages of akaike information criterion and bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
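
    The AIC-based model averaging referred to above uses Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = AIC_i - min(AIC); a short sketch with illustrative scores:

```python
# Akaike weights and a model-averaged parameter estimate.
import numpy as np

aic = np.array([1002.3, 1000.1, 1005.7, 1001.0])   # illustrative AIC scores
estimate = np.array([0.31, 0.28, 0.35, 0.30])      # same parameter per model

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

print("Akaike weights:", np.round(w, 3))
print("model-averaged estimate:", float(w @ estimate))
```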

  14. THE NuSTAR X-RAY SPECTRUM OF HERCULES X-1: A RADIATION-DOMINATED RADIATIVE SHOCK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolff, Michael T.; Wood, Kent S.; Becker, Peter A.

    2016-11-10

    We report on new spectral modeling of the accreting X-ray pulsar Hercules X-1. Our radiation-dominated radiative shock model is an implementation of the analytic work of Becker and Wolff on Comptonized accretion flows onto magnetic neutron stars. We obtain a good fit to the spin-phase-averaged 4–78 keV X-ray spectrum observed by the Nuclear Spectroscopic Telescope Array during a main-on phase of the Her X-1 35 day accretion disk precession period. This model allows us to estimate the accretion rate, the Comptonizing temperature of the radiating plasma, the radius of the magnetic polar cap, and the average scattering opacity parameters in the accretion column. This is in contrast to previous phenomenological models that characterized the shape of the X-ray spectrum but could not determine the physical parameters of the accretion flow. We describe the spectral fitting details and discuss the interpretation of the accretion flow physical parameters.

  15. Disordered $\lambda\varphi^4 + \rho\varphi^6$ Landau-Ginzburg model

    NASA Astrophysics Data System (ADS)

    Diaz, R. Acosta; Svaiter, N. F.; Krein, G.; Zarro, C. A. D.

    2018-03-01

    We discuss a disordered $\lambda\varphi^4 + \rho\varphi^6$ Landau-Ginzburg model defined in a d-dimensional space. First we adopt the standard procedure of averaging the disorder-dependent free energy of the model. The dominant contribution to this quantity is represented by a series of the replica partition functions of the system. Next, using the replica-symmetry ansatz in the saddle-point equations, we prove that the average free energy represents a system with multiple ground states with different order parameters. For low temperatures we show the presence of metastable equilibrium states for some replica fields for a range of values of the physical parameters. Finally, going beyond the mean-field approximation, the one-loop renormalization of this model is performed in the leading-order replica partition function.

  16. Comparing Families of Dynamic Causal Models

    PubMed Central

    Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.

    2010-01-01

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649

  17. Ionospheric absorption, typical ionization, conductivity, and possible synoptic heating parameters in the upper atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, J.K.; Bhatnagar, V.P.

    1989-04-01

    Relations for the average energetic particle heating and the typical Hall and Pedersen conductances, as functions of the ground-based HF radio absorption, are determined. Collis and coworkers used the geosynchronous GEOS 2 particle data to relate, or "calibrate," the auroral absorption on the same magnetic field lines with five levels of D region ionization. These ionospheric models are related to a Chapman layer that extends them into the E region. The average energetic particle heating is calculated for each of these models using recent expressions for the effective recombination coefficient. The corresponding height-integrated heating rates are determined and related to the absorption with a quadratic expression. The average Hall and Pedersen conductivities are calculated for each of the nominal absorption ionospheric models. The corresponding height-integrated conductances for nighttime conditions are determined and related to the absorption. Expressions for these conductances during disturbed sunlit conditions are also determined. These relations can be used in conjunction with simultaneous ground-based riometric and magnetic observations to determine the average Hall and Pedersen currents and the Joule heating. The typical daily rate of temperature increase in the mesosphere under storm conditions is several tens of kelvins for both the energetic particle and the Joule heating. The increasing importance of these parameters of the upper and middle atmosphere is discussed. It is proposed that northern hemisphere ionospheric, current, and heating synoptic models and parameters be investigated for use on a regular basis.

  18. Io's Heat Flow: A Model Including "Warm" Polar Regions

    NASA Astrophysics Data System (ADS)

    Veeder, G. J.; Matson, D. L.; Johnson, T. V.; Davies, A. G.; Blaney, D. L.

    2002-12-01

    Some 90 percent of Io's surface is thermally "passive" material. It is separate from the sites of active volcanic eruptions. Though "passive", its thermal behavior continues to be a challenge for modelers. The usual approach is to take albedo, average daytime temperature, temperature as a function of time of day, etc., and attempt to match these constraints with a uniform surface with a single value of thermal inertia. Io is a case where even globally averaged observations are inconsistent with a single-thermal-inertia model approach. The Veeder et al. (1994) model for "passive" thermal emission addressed seven constraints derived from a decade of ground-based, global observations - average albedo plus infrared fluxes at three separate wavelengths (4.8, 8.7, and 20 microns) for both daytime and eclipsed conditions. This model has only two components - a unit of infinite thermal inertia and a unit of zero thermal inertia. The free parameters are the areal coverage ratio of the two units and their relative albedos (constrained to match the known average albedo). This two-parameter model agreed with the global radiometric data and also predicted significantly higher non-volcanic nighttime temperatures than traditional ("lunar-like") single thermal inertia models. Recent observations from the Galileo infrared radiometer show relatively uniform minimum nighttime temperatures. In particular, they show little variation with either latitude or time of night (Spencer et al., 2000; Rathbun et al., 2002). Additionally, detailed analyses of Io's scattering properties and reflectance variations have led to the interesting conclusion that Io's albedo at regional scales varies little with latitude (Simonelli et al., 2001). This effectively adds four new observational constraints - lack of albedo variation with latitude, average minimum nighttime temperature, and lack of variation of temperature with either latitude or longitude. We have made the fewest modifications necessary for the Veeder et al. model to match these new constraints - we added two model parameters to characterize the volcanically heated high-latitude units. These are the latitude above which the unit exists and its nighttime temperature. The resulting four-parameter model is the first that encompasses all of the available observations of Io's thermal emission and that quantitatively satisfies all eleven observational constraints. While no model is unique, this model is significant because it is the first to accommodate widespread polar regions that are relatively "warm". This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.

  19. Carbon-13 and proton nuclear magnetic resonance analysis of shale-derived refinery products and jet fuels and of experimental referee broadened-specification jet fuels

    NASA Technical Reports Server (NTRS)

    Dalling, D. K.; Bailey, B. K.; Pugmire, R. J.

    1984-01-01

    A proton and carbon-13 nuclear magnetic resonance (NMR) study was conducted of Ashland shale oil refinery products, experimental referee broadened-specification jet fuels, and of related isoprenoid model compounds. Supercritical fluid chromatography techniques using carbon dioxide were developed on a preparative scale, so that samples could be quantitatively separated into saturates and aromatic fractions for study by NMR. An optimized average parameter treatment was developed, and the NMR results were analyzed in terms of the resulting average parameters; formulation of model mixtures was demonstrated. Application of novel spectroscopic techniques to fuel samples was investigated.

  20. A computational model for biosonar echoes from foliage

    PubMed Central

    Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals’ sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats. PMID:28817631
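
    A minimal sketch of the generative process described above, with invented parameter values: leaf positions drawn uniformly, sizes and orientations drawn from Gaussians, and a toy impulse sum standing in for the full disk-scattering computation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# The three foliage parameters named in the abstract (values are invented).
leaf_density = 50.0                       # leaves per cubic metre
mean_radius, sd_radius = 0.04, 0.01       # m, Gaussian leaf-size distribution
mean_tilt, sd_tilt = 0.5, 0.2             # rad, Gaussian leaf orientation

side = 2.0                                # insonified cube edge length, m
n_leaves = rng.poisson(leaf_density * side**3)

# i.i.d. draws as described: uniform positions, Gaussian sizes/orientations.
positions = rng.uniform(0.0, side, size=(n_leaves, 3))
radii = np.clip(rng.normal(mean_radius, sd_radius, n_leaves), 1e-3, None)
tilts = rng.normal(mean_tilt, sd_tilt, n_leaves)

# Toy echo: each disk adds a delayed impulse whose amplitude scales with its
# projected area; the real model replaces this with disk scattering theory.
c = 343.0                                 # speed of sound, m/s
delays = 2.0 * positions[:, 0] / c        # two-way travel time along x
amplitudes = np.pi * radii**2 * np.abs(np.cos(tilts))

t = np.arange(0.0, 0.02, 1e-5)            # 20 ms receive window
echo = np.zeros_like(t)
np.add.at(echo, np.searchsorted(t, delays), amplitudes)
```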

  2. Flight Control of Biomimetic Air Vehicles Using Vibrational Control and Averaging

    NASA Astrophysics Data System (ADS)

    Tahmasian, Sevak; Woolsey, Craig A.

    2017-08-01

    A combination of vibrational inputs and state feedback is applied to control the flight of a biomimetic air vehicle. First, a control strategy is developed for longitudinal flight, using a quasi-steady aerodynamic model and neglecting wing inertial effects. Vertical and forward motion is controlled by modulating the wings' stroke and feather angles, respectively. Stabilizing control parameter values are determined using the time-averaged dynamic model. Simulations of a system resembling a hawkmoth show that the proposed controller can overcome modeling error associated with the wing inertia and small parameter uncertainties when following a prescribed trajectory. After introducing the approach through an application to longitudinal flight, the control strategy is extended to address flight in three-dimensional space.

  3. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
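
    To make the basis-function idea concrete, the sketch below fits an average growth trajectory with a cubic Legendre basis; the ages, weights, and polynomial order are invented for illustration, and a full random-regression analysis would add animal-specific random coefficients on the same basis.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

# Hypothetical ages (days on test) and body weights (kg) for young bulls.
age = np.linspace(210, 550, 50)
weight = 250 + 0.9 * (age - age.min()) + rng.normal(0, 10, age.size)

# Legendre polynomials are defined on [-1, 1], so standardize age first.
x = 2 * (age - age.min()) / (age.max() - age.min()) - 1

# Design matrix with one column per basis function (constant through cubic),
# then fit the average growth trajectory by ordinary least squares.
Phi = legendre.legvander(x, 3)
coef, *_ = np.linalg.lstsq(Phi, weight, rcond=None)
fitted = Phi @ coef
print(coef)
```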

  4. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  5. Thomson scattering in the average-atom approximation.

    PubMed

    Johnson, W R; Nilsen, J; Cheng, K T

    2012-09-01

    The average-atom model is applied to study Thomson scattering of x-rays from warm dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave functions, and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Applications are given to dense hydrogen, beryllium, aluminum, and titanium plasmas. In the case of titanium, bound states are predicted to modify the spectrum significantly.

  6. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly affect model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so the simulation results with optimal parameters may violate conditions that the model has to satisfy. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m⁻² in CAM5. Experiment results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, which is clearly better than the annual average FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.
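
    The sketch below illustrates the constrained-tuning idea on a toy surrogate: two free parameters, an invented skill metric, and an invented linear stand-in for FLUT held at 240 W m⁻². SLSQP solves the equality-constrained problem with the same Lagrange-multiplier machinery the paper describes; none of the functions below are CAM5 output.

```python
import numpy as np
from scipy.optimize import minimize

# Invented surrogate skill metric over two uncertain physics parameters.
def metric(theta):
    return (theta[0] - 1.2) ** 2 + 0.5 * (theta[1] + 0.3) ** 2

# Invented linear surrogate for the constrained radiation diagnostic.
def flut(theta):
    return 238.0 + 1.5 * theta[0] - 0.8 * theta[1]

# SLSQP enforces the equality constraint flut(theta) = 240 via multipliers.
res = minimize(metric, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda th: flut(th) - 240.0}])
print(res.x, metric(res.x), flut(res.x))
```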

  7. Effects of photosynthetic photon flux density, frequency, duty ratio, and their interactions on net photosynthetic rate of cos lettuce leaves under pulsed light: explanation based on photosynthetic-intermediate pool dynamics.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2018-06-01

    Square-wave pulsed light is characterized by three parameters: average photosynthetic photon flux density (PPFD), pulsed-light frequency, and duty ratio (the ratio of light-period duration to that of the light-dark cycle). In addition, the light-period PPFD is determined by the averaged PPFD and the duty ratio. We investigated the effects of these parameters and their interactions on the net photosynthetic rate (Pn) of cos lettuce leaves for every combination of parameters. Averaged PPFD values were 0-500 µmol m⁻² s⁻¹. Frequency values were 0.1-1000 Hz. White LED arrays were used as the light source. Every parameter affected Pn, and interactions between parameters were observed for all combinations. The Pn under pulsed light was lower than that measured under continuous light of the same averaged PPFD, and this difference was enhanced with decreasing frequency and increasing light-period PPFD. A mechanistic model was constructed to estimate the amount of stored photosynthetic intermediates over time under pulsed light. The results indicated that all effects of the parameters and their interactions on Pn were explainable by considering the dynamics of accumulation and consumption of photosynthetic intermediates.
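
    The coupling among the three pulse parameters is simple enough to state in a few lines; the sketch below, with illustrative values, shows how the light-period PPFD follows from the cycle-averaged PPFD and the duty ratio.

```python
# Illustrative pulse parameters; light-period PPFD equals the cycle-averaged
# PPFD divided by the duty ratio.
avg_ppfd = 200.0      # umol m-2 s-1, averaged over the light-dark cycle
duty_ratio = 0.25     # light fraction of each cycle
frequency = 10.0      # Hz

light_period_ppfd = avg_ppfd / duty_ratio     # 800 umol m-2 s-1 during pulses
cycle_duration = 1.0 / frequency              # 0.1 s per light-dark cycle
light_duration = duty_ratio * cycle_duration  # 0.025 s of light per cycle
print(light_period_ppfd, cycle_duration, light_duration)
```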

  8. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  9. Failure time prediction for a type of lamp using a general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the average predicted failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The underlying random failure-time model is the exponential distribution, used as the baseline, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function that uses an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals with the average failure value in each interval, and the average failure time of the model is calculated on each interval; the p-value obtained from the test result is 0.3296.
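
    A minimal sketch of the exponential baseline step, using made-up failure times: the constant-hazard maximum-likelihood estimate is the reciprocal of the sample mean, and the fitted survival curve can be compared with the empirical one built from the sorted data.

```python
import numpy as np

# Hypothetical failure times (hours) for a batch of lamps.
t = np.array([310., 450., 520., 680., 720., 890., 1010., 1240.])

# Exponential baseline: constant hazard h(t) = lam, survival
# S(t) = exp(-lam * t), and the MLE of lam is 1 / sample mean.
lam_hat = 1.0 / t.mean()
mean_failure_time = 1.0 / lam_hat      # equals the sample mean, ~727.5 h

# Empirical survival function for comparison with the fitted model.
t_sorted = np.sort(t)
S_emp = 1.0 - np.arange(1, t.size + 1) / t.size
S_fit = np.exp(-lam_hat * t_sorted)
print(mean_failure_time, S_emp, S_fit)
```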

  10. Mapping Surface Cover Parameters Using Aggregation Rules and Remotely Sensed Cover Classes. Version 1.9

    NASA Technical Reports Server (NTRS)

    Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes

    1997-01-01

    A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1 km land cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with the 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, but there were some significant differences, particularly in the western USA.

  11. Förster-type energy transfer as a probe for changes in local fluctuations of the protein matrix.

    PubMed

    Somogyi, B; Matkó, J; Papp, S; Hevessy, J; Welch, G R; Damjanovich, S

    1984-07-17

    Much evidence, on both theoretical and experimental sides, indicates the importance of local fluctuations (in energy levels, conformational substates, etc.) of the macromolecular matrix in the biological activity of proteins. We describe here a novel application of the Förster-type energy-transfer process capable of monitoring changes both in local fluctuations and in conformational states of macromolecules. A new energy-transfer parameter, f, is defined as the average transfer efficiency, ⟨E⟩, normalized by the actual average quantum efficiency of the donor fluorescence, ⟨Φ_D⟩. A simple oscillator model (for a one donor-one acceptor system) is presented to show the sensitivity of this parameter to changes in the amplitudes of local fluctuations. The different modes of averaging (static, dynamic, and intermediate cases) occurring for a given value of the average transfer rate, ⟨k_t⟩, and the experimental requirements as well as limitations of the method are also discussed. The experimental tests were performed on the ribonuclease T1-pyridoxamine 5'-phosphate conjugate (a one donor-one acceptor system) by studying the change of the f parameter with temperature, an environmental parameter expected to perturb local fluctuations of proteins. The parameter f increased with increasing temperature as expected on the basis of the oscillator model, suggesting that it really reflects changes in fluctuation amplitudes (significant changes in the orientation factor, κ², as well as in the spectral properties of the fluorophores can be excluded by anisotropy measurements and spectral investigations). Possibilities for the general applicability of the method are also discussed.

  12. Streamwise Versus Spanwise Spacing of Obstacle Arrays: Parametrization of the Effects on Drag and Turbulence

    NASA Astrophysics Data System (ADS)

    Simón-Moral, Andres; Santiago, Jose Luis; Krayenhoff, E. Scott; Martilli, Alberto

    2014-06-01

    A Reynolds-averaged Navier-Stokes model is used to investigate the evolution of the sectional drag coefficient and turbulent length scales with the layout of aligned arrays of cubes. Results show that the sectional drag coefficient is determined by the non-dimensional streamwise distance (sheltering parameter) and the non-dimensional spanwise distance (channelling parameter) between obstacles. This differs from previous approaches that consider only the plan area density λ_p. On the other hand, turbulent length scales behave similarly to the staggered case (e.g., they are functions of λ_p only). Analytical formulae are proposed for the length scales and for the sectional drag coefficient as functions of the sheltering and channelling parameters, and implemented in a column model. This approach demonstrates good skill in the prediction of vertical profiles of the spatially-averaged horizontal wind speed.

  13. Mesoscopic fluctuations and intermittency in aging dynamics

    NASA Astrophysics Data System (ADS)

    Sibani, P.

    2006-01-01

    Mesoscopic aging systems are characterized by large intermittent noise fluctuations. In a record dynamics scenario (Sibani P. and Dall J., Europhys. Lett., 64 (2003) 8) these events, quakes, are treated as a Poisson process with average α ln(1 + t/t_w), where t is the observation time, t_w is the age, and α is a parameter. Assuming for simplicity that quakes constitute the only source of de-correlation, we present a model for the probability density function (PDF) of the configuration autocorrelation function. Besides α, the model has the average quake size 1/q as a parameter. The model autocorrelation PDF has a Gumbel-like shape, which approaches a Gaussian for large t/t_w and becomes sharply peaked in the thermodynamic limit. Its average and variance, which are given analytically, depend on t/t_w as a power law and a power law with a logarithmic correction, respectively. Most predictions are in good agreement with data from the literature and with simulations of the Edwards-Anderson spin glass carried out as a test.
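
    The sketch below simulates the quake statistics stated in the abstract: the number of quakes observed between ages t_w and t_w + t is Poisson with mean α ln(1 + t/t_w), and, as a crude stand-in for the paper's analytical PDF, each quake is given an exponentially distributed size with mean 1/q that decorrelates the configuration. The parameter values and the exponential-size assumption are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: alpha sets the quake rate, 1/q the mean quake
# size; t_w is the system age when observation starts.
alpha, q, t_w = 4.0, 20.0, 1000.0
t = np.logspace(0, 4, 50)                  # observation times

# Number of quakes in (t_w, t_w + t): Poisson with mean alpha*ln(1 + t/t_w).
n_quakes = rng.poisson(alpha * np.log1p(t / t_w))

# Crude autocorrelation stand-in: each quake removes an exponentially
# distributed fraction of the correlation (mean size 1/q).
autocorr = np.array([np.exp(-rng.exponential(1.0 / q, size=n).sum())
                     for n in n_quakes])
print(n_quakes[:10], autocorr[:10])
```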

  14. Interactive vs. Non-Interactive Ensembles for Weather Prediction and Climate Projection

    NASA Astrophysics Data System (ADS)

    Duane, Gregory

    2013-04-01

    If the members of an ensemble of different models are allowed to interact with one another at run time, predictive skill can be improved compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. Previous results from an ENSO-prediction supermodel [Kirtman et al.] are re-examined in light of this hypothesis about the importance of qualitative inter-model differences.

  15. Comparison of free-breathing with navigator-controlled acquisition regimes in abdominal diffusion-weighted magnetic resonance images: Effect on ADC and IVIM statistics.

    PubMed

    Jerome, Neil P; Orton, Matthew R; d'Arcy, James A; Collins, David J; Koh, Dow-Mu; Leach, Martin O

    2014-01-01

    To evaluate the effect on diffusion-weighted image-derived parameters in the apparent diffusion coefficient (ADC) and intra-voxel incoherent motion (IVIM) models of the choice of either free-breathing or navigator-controlled acquisition. Imaging was performed with consent from healthy volunteers (n = 10) on a 1.5T Siemens Avanto scanner. Parameter-matched free-breathing and navigator-controlled diffusion-weighted images were acquired, without averaging on the console, for a total scan time of ∼10 minutes. Regions of interest were drawn for renal cortex, renal pyramid, whole kidney, liver, spleen, and paraspinal muscle. An ADC diffusion model for these regions was fitted for b-values ≥ 250 s/mm², using a Levenberg-Marquardt algorithm, and an IVIM model was fitted for all images using a Bayesian method. ADC and IVIM parameters from the two acquisition regimes show no significant differences for the cohort; individual cases show occasional discrepancies, with outliers in parameter estimates arising more commonly from navigator-controlled scans. The navigator-controlled acquisitions showed, on average, a smaller range of movement for the kidneys (6.0 ± 1.4 vs. 10.0 ± 1.7 mm, P = 0.03), but also a smaller number of averages collected (3.9 ± 0.1 vs. 5.5 ± 0.2, P < 0.01) in the allocated time. Navigator triggering offers no advantage in fitted diffusion parameters, whereas free-breathing appears to offer greater confidence in fitted diffusion parameters, with fewer outliers, for matched acquisition periods.
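
    For the ADC step, a mono-exponential decay S(b) = S0 exp(-b·ADC) fitted to the b ≥ 250 s/mm² images captures the idea; the sketch below uses a log-linear least-squares fit on synthetic ROI signals as a simpler stand-in for the Levenberg-Marquardt fit used in the study.

```python
import numpy as np

# Synthetic ROI signals at the b-values used for the ADC fit; the numbers
# are invented, not from the study.
b = np.array([250.0, 500.0, 750.0, 900.0])     # s/mm^2
S = np.array([152.0, 101.0, 68.0, 52.0])       # mean ROI signal, arb. units

# Log-linear least squares on ln S = ln S0 - b * ADC.
A = np.vstack([np.ones_like(b), -b]).T
(ln_s0, adc), *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
print(f"S0 = {np.exp(ln_s0):.1f}, ADC = {adc:.2e} mm^2/s")
```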

  16. Relationship between road traffic accidents and conflicts recorded by drive recorders.

    PubMed

    Lu, Guangquan; Cheng, Bo; Kuzumaki, Seigo; Mei, Bingsong

    2011-08-01

    Road traffic conflicts can be used to estimate the probability of accident occurrence, assess road safety, or evaluate road safety programs if the relationship between road traffic accidents and conflicts is known. To this end, we propose a model for the relationship between road traffic accidents and conflicts recorded by drive recorders (DRs). DRs were installed in 50 cars in Beijing to collect records of traffic conflicts. Data containing 1366 conflicts were collected in 193 days. The hourly distributions of conflicts and accidents were used to model the relationship between accidents and conflicts. To eliminate time series and base number effects, we defined and used 2 parameters: average annual number of accidents per 10,000 vehicles per hour and average number of conflicts per 10,000 vehicles per hour. A model was developed to describe the relationship between the two parameters. If A(i) = average annual number of accidents per 10,000 vehicles per hour at hour i, and E(i) = average number of conflicts per 10,000 vehicles per hour at hour i, the relationship can be expressed as [Formula in text] (α>0, β>0). The average number of traffic accidents increases as the number of conflicts rises, but the rate of increase decelerates as the number of conflicts increases further. The proposed model can describe the relationship between road traffic accidents and conflicts in a simple manner. According to our analysis, the model fits the present data.

  17. Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects

    NASA Astrophysics Data System (ADS)

    Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.

    2013-08-01

    In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described by log-normal modes within predetermined size ranges. Such a method involves numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction coefficient, and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines and a high-resolution sectional model that was considered the reference model. We analysed the relative differences of the parameters in different experiments that were designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the subsequent numerical remapping of the distribution by reallocation leads on average to underestimation of cloud droplet number concentration (up to 100%) and overestimation of light extinction (up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause on average around 10% overestimation of the ACI parameter. It is thus shown that reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.

  18. Effects of spatial variability and scale on areal-average evapotranspiration

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  19. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.

  20. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    PubMed

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  1. Testing averaged cosmology with type Ia supernovae and BAO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  2. Prognostic characteristics of the lowest-mode internal waves in the Sea of Okhotsk

    NASA Astrophysics Data System (ADS)

    Kurkin, Andrey; Kurkina, Oxana; Zaytsev, Andrey; Rybin, Artem; Talipova, Tatiana

    2017-04-01

    The nonlinear dynamics of short-period internal waves on ocean shelves is well described by generalized nonlinear evolutionary models of Korteweg-de Vries type. Parameters of these models, such as the long-wave propagation speed and the nonlinear and dispersive coefficients, can be calculated from hydrological data (sea water density stratification), and therefore have geographical and seasonal variations. The internal wave parameters for the basin of the Sea of Okhotsk are computed on the basis of a recent version of the hydrological data source GDEM V3.0. Geographical and seasonal variability of internal wave characteristics is investigated. It is shown that annually or seasonally averaged data can be used for the linear parameters. The nonlinear parameters are more sensitive to temporal averaging of hydrological data, and detailed data are preferable. The zones where the nonlinear parameters change their signs (so-called "turning points") are identified. Possible internal waveforms appearing in the process of internal tide transformation, including solitary waves changing polarity, are simulated for the hydrological conditions of the Sea of Okhotsk shelf to demonstrate different scenarios of internal wave adjustment, transformation, refraction, and cylindrical divergence.

  3. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of evaporation process parameters suffer from large prediction errors because the parameters are continuous and cumulative. Based on the characteristics of the process, an adaptive particle swarm process neural network forecasting method is proposed, with an autoregressive moving average (ARMA) error-correction procedure that compensates the neural network predictions to improve prediction accuracy. Production data from an alumina plant evaporation process were used for validation; compared with the traditional model, the prediction accuracy of the new model is greatly improved, and it can be used to predict the dynamics of the components of sodium aluminate solution during evaporation.
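
    The error-compensation idea can be sketched independently of the neural network: fit a base predictor, model its residuals with an ARMA process, and add the residual predictions back. Below, a plain trend fit stands in for the process neural network, on synthetic data with an AR(1) disturbance; all values are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 300

# Synthetic "evaporation" variable: linear trend plus an AR(1) disturbance.
e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.8 * e[k - 1] + rng.normal(0, 0.5)
y = 50.0 + 0.02 * np.arange(n) + e

# Stand-in for the base predictor (the paper uses a process neural network):
# a plain trend fit, which leaves serially correlated residuals behind.
t = np.arange(n)
slope, intercept = np.polyfit(t, y, 1)
base_pred = intercept + slope * t
resid = y - base_pred

# ARMA error compensation: model the residuals and add the one-step-ahead
# in-sample predictions back onto the base forecast.
arma = ARIMA(resid, order=(1, 0, 1)).fit()
compensated = base_pred + arma.predict(start=0, end=n - 1)

print(np.mean(resid**2), np.mean((y - compensated)**2))
```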

  4. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties, and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. In the end, the minimum value of the average grain size with respect to the input parameters has been found.
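
    A minimal sketch of the RSM step, with invented design points and grain-size responses: fit a full quadratic surface to the FE results, then minimize it over the parameter box. A bounded local minimizer is used here for brevity where the paper applies a genetic algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points: mandrel feed rate (mm/s) and driver-roll
# angular velocity (rad/s), with average grain size (um) from FE runs.
X = np.array([[0.5, 2.0], [0.5, 4.0], [1.0, 3.0], [1.5, 2.0],
              [1.5, 4.0], [1.0, 2.0], [1.0, 4.0], [0.5, 3.0], [1.5, 3.0]])
g = np.array([42., 38., 35., 39., 37., 40., 36., 40., 38.])  # invented

# Full quadratic response surface:
# g ~ c0 + c1*f + c2*w + c3*f^2 + c4*w^2 + c5*f*w.
def basis(x):
    f, w = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(f), f, w, f**2, w**2, f * w], axis=-1)

coef, *_ = np.linalg.lstsq(basis(X), g, rcond=None)
surface = lambda x: basis(np.asarray(x)) @ coef

# Minimize the fitted surface inside the explored design space.
res = minimize(surface, x0=[1.0, 3.0], bounds=[(0.5, 1.5), (2.0, 4.0)])
print(res.x, res.fun)
```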

  5. Seismic properties of the crust and uppermost mantle of North America

    NASA Technical Reports Server (NTRS)

    Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B.; Keller, G. R.

    1983-01-01

    Seismic refraction profiles for the North American continent were compiled. The crustal models compiled data on the upper mantle seismic velocity (Pn), the crustal thickness (Hc), and the average seismic velocity of the crystalline crust (Vp). Compressional wave parameters were compared with shear wave data derived from surface wave dispersion models and indicate an average value of Poisson's ratio of 0.252 for the crust and 0.273 for the uppermost mantle. Contour maps illustrate lateral variations in crustal thickness, upper mantle velocity, and average seismic velocity of the crystalline crust. The distributions of seismic parameters are compared with a smoothed free-air anomaly map of North America and indicate that a complicated mechanism of isostatic compensation exists for the North American continent. Several features on the seismic contour maps also correlate with regional magnetic anomalies.

  6. Nonlinear-regression flow model of the Gulf Coast aquifer systems in the south-central United States

    USGS Publications Warehouse

    Kuiper, L.K.

    1994-01-01

    A multiple-regression methodology was used to help answer questions concerning model reliability and to calibrate a time-dependent variable-density ground-water flow model of the gulf coast aquifer systems in the south-central United States. More than 40 regression models with 2 to 31 regression parameters were used, and detailed results are presented for 12 of the models. More than 3,000 values of grid-element volume-averaged head and hydraulic conductivity were used as the regression model observations. Calculated prediction interval half widths, though perhaps inaccurate owing to a lack of normality of the residuals, are smallest for models with only four regression parameters. In addition, the root-mean weighted residual decreases very little with an increase in the number of regression parameters. The various models showed considerable overlap between the prediction intervals for shallow head and hydraulic conductivity. Approximate 95-percent prediction interval half widths exceed 108 feet for volume-averaged freshwater head and 0.89 for volume-averaged base-10 logarithm of hydraulic conductivity. All of the models are unreliable for the prediction of head and ground-water flow in the deeper parts of the aquifer systems, including the amount of flow coming from the underlying geopressured zone. Truncating the domain of solution of one model to exclude the part of the system having a ground-water density greater than 1.005 grams per cubic centimeter, or the part below a depth of 3,000 feet, and setting the density to that of freshwater does not appreciably change the results for head and ground-water flow, except for locations close to the truncation surface.

  7. Identification of multivariable nonlinear systems in the presence of colored noises using iterative hierarchical least squares algorithm.

    PubMed

    Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam

    2014-07-01

    This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which, the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches are investigated through three nonlinear MIMO case studies.

  8. Tuning a physically-based model of the air-sea gas transfer velocity

    NASA Astrophysics Data System (ADS)

    Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.

    Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates based on simple wind-speed dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange, respectively, are consistent with the global average CO₂ transfer velocity k. Using these parameters and a simple 2nd-order polynomial approximation with respect to wind speed, we estimate a global annual average k for CO₂ of 16.4 ± 5.6 cm h⁻¹ when using global mean winds of 6.89 m s⁻¹ from the NCEP/NCAR Reanalysis 1 (1954-2000). The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties, including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm h⁻¹, whilst for less soluble methane the estimate is 18.0 cm h⁻¹.
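
    As a toy illustration of the wind-speed fit (not the paper's actual coefficients), a purely quadratic dependence can be calibrated so that the quoted global-mean wind reproduces the quoted global-average CO₂ transfer velocity. Note that averaging k over a wind distribution is not the same as evaluating k at the mean wind, so this is only indicative.

```python
# Toy calibration of a quadratic wind-speed dependence; the study itself
# fits a full 2nd-order polynomial with different coefficients.
u_mean = 6.89               # m/s, NCEP/NCAR global mean 10-m wind
k_mean = 16.4               # cm/h, tuned global-average CO2 transfer velocity

a = k_mean / u_mean ** 2    # ~0.35 cm/h per (m/s)^2

def k_co2(u10):
    """Quadratic stand-in for the tuned transfer-velocity fit (cm/h)."""
    return a * u10 ** 2

print(k_co2(4.0), k_co2(10.0))
```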

  9. Gaussian quadrature exponential sum modeling of near infrared methane laboratory spectra obtained at temperatures from 106 to 297 K

    NASA Technical Reports Server (NTRS)

    Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.

    1990-01-01

    Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least-squares fits at 10 cm⁻¹ intervals from 3800 to 9100 cm⁻¹. These results will be useful in the interpretation of current planetary spectra and also of NIMS spectra of Jupiter anticipated from the Galileo mission.

  10. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  11. Studies into the averaging problem: Macroscopic gravity and precision cosmology

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake S.

    2016-08-01

    With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and take into consideration nonlinear and relativistic corrections which may affect it at the percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks to explore nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but in which the small-scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. Thus, for the same matter content, the growth rate of large-scale structure in the MG model is stronger than that of the standard model. Finally, we constrain the MG model using Cosmic Microwave Background temperature anisotropy data, the distance to supernovae data, the galaxy power spectrum, the weak lensing tomography shear-shear cross-correlations, and the baryonic acoustic oscillations. We find that for this model the averaging density parameter is very small and does not cause any significant shift in the other cosmological parameters. However, it can lead to increased errors on some cosmological parameters such as the Hubble constant and the amplitude of the linear matter spectrum at the scale of 8 h⁻¹ Mpc. Further studies are needed to explore other solutions and models of MG as well as their effects on precision cosmology.

  12. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 µm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of the mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 µm) and grain radii (a > 0.5 µm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.

  13. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  14. On the use of the generalized SPRT method in the equivalent hard sphere approximation for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Noguere, Gilles; Archier, Pascal; Bouland, Olivier; Capote, Roberto; Jean, Cyrille De Saint; Kopecky, Stefan; Schillebeeckx, Peter; Sirakov, Ivan; Tamagno, Pierre

    2017-09-01

    A consistent description of the neutron cross sections from thermal energy up to the MeV region is challenging. One of the first steps consists in optimizing the optical model parameters using average resonance parameters, such as the neutron strength functions. These can be derived from a statistical analysis of the resolved resonance parameters, or calculated with the generalized form of the SPRT method using scattering matrix elements provided by optical model calculations. One of the difficulties is to establish the contributions of the direct and compound nucleus reactions. This problem was solved by using a slightly modified average R-matrix formula with an equivalent hard sphere radius deduced from the phase shift originating from the potential. The performance of the proposed formalism is illustrated with results obtained for the 238U+n nuclear system.

  15. Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    NASA Technical Reports Server (NTRS)

    Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.

    2013-01-01

    Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7 ± 2.9% (which includes 4.2 ± 0.9% from snowfall that promptly melts), whereas 70.3 ± 2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500 m range contributes the most to basin runoff, averaging 56.9 ± 3.6% of all snowmelt input and 28.9 ± 1.1% of all rainfall input to runoff. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow, but that this derives from the degree-day melting model. Lastly, we demonstrate that the data assimilation approach is useful for quantifying and reducing uncertainty related to model parameters and thus provides uncertainty bounds on snowmelt and rainfall contributions in such mountainous watersheds.
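
    The degree-day formulation at the core of such a model is compact enough to show directly. A minimal sketch, with illustrative (not Tamor-calibrated) coefficient values:

    ```python
    # Hedged sketch of the degree-day melt relation used by conceptual snowmelt
    # runoff models; the coefficients are illustrative, not the calibrated values.
    import numpy as np

    def degree_day_melt(t_mean_c, ddf=4.0, t_crit=0.0):
        """Daily melt depth (mm) from daily mean air temperature (deg C).

        ddf    : degree-day factor, mm per deg C per day (assumed value)
        t_crit : threshold temperature for melt onset (assumed value)
        """
        return ddf * np.maximum(t_mean_c - t_crit, 0.0)

    temps = np.array([-3.0, 0.5, 2.0, 5.5])   # example daily mean temperatures
    print(degree_day_melt(temps))             # expected: [0, 2, 8, 22] mm
    ```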

  16. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides an estimate of the spread or uncertainty in CME arrival-time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival-time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival-time predictions for 14 runs (half). The average arrival-time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrival is not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
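
    The quoted error statistics are straightforward to reproduce from an ensemble of predictions. A minimal sketch, with invented arrival times rather than the 2013-2014 event data:

    ```python
    # Hedged sketch of the quoted error statistics; the arrival times below are
    # invented placeholders, not the 2013-2014 event data.
    import numpy as np

    predicted = np.array([11.0, 14.5, 9.0])   # ensemble-average predicted arrivals (h)
    observed = np.array([13.0, 12.0, 21.0])   # observed arrivals (h)

    errors = predicted - observed
    mae = np.mean(np.abs(errors))             # average absolute error
    rmse = np.sqrt(np.mean(errors ** 2))      # root-mean-square error
    print(f"MAE = {mae:.1f} h, RMSE = {rmse:.1f} h")
    ```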

  17. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process of strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model through computation of its parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  18. Investigations on the sensitivity of the computer code TURBO-2D

    NASA Astrophysics Data System (ADS)

    Amon, B.

    1994-12-01

    The two-dimensional computer model TURBO-2D for the calculation of two-phase flow was used to simulate the cold injection of fuel into a model chamber. The sensitivity of the computed results to the input parameters was investigated. In addition, calculations using experimental injection pressure data and the corresponding averaged injection parameters were performed and compared.

  19. Influence of optic disc size on the diagnostic performance of macular ganglion cell complex and peripapillary retinal nerve fiber layer analyses in glaucoma.

    PubMed

    Cordeiro, Daniela Valença; Lima, Verônica Castro; Castro, Dinorah P; Castro, Leonardo C; Pacheco, Maria Angélica; Lee, Jae Min; Dimantas, Marcelo I; Prata, Tiago Santos

    2011-01-01

    To evaluate the influence of optic disc size on the diagnostic accuracy of macular ganglion cell complex (GCC) and conventional peripapillary retinal nerve fiber layer (pRNFL) analyses provided by spectral domain optical coherence tomography (SD-OCT) in glaucoma. Eighty-two glaucoma patients and 30 healthy subjects were included. All patients underwent GCC (7 × 7 mm macular grid, consisting of RNFL, ganglion cell and inner plexiform layers) and pRNFL thickness measurement (3.45 mm circular scan) by SD-OCT. One eye was randomly selected for analysis. Initially, receiver operating characteristic (ROC) curves were generated for different GCC and pRNFL parameters. The effect of disc area on the diagnostic accuracy of these parameters was evaluated using a logistic ROC regression model. Subsequently, 1.5, 2.0, and 2.5 mm² disc sizes were arbitrarily chosen (based on data distribution) and the predicted areas under the ROC curves (AUCs) and sensitivities were compared at fixed specificities for each. The average mean deviation index for glaucomatous eyes was -5.3 ± 5.2 dB. Similar AUCs were found for the best pRNFL (average thickness = 0.872) and GCC parameters (average thickness = 0.824; P = 0.19). The coefficient representing disc area in the ROC regression model was not statistically significant for average pRNFL thickness (-0.176) or average GCC thickness (0.088; P ≥ 0.56). AUCs for fixed disc areas (1.5, 2.0, and 2.5 mm²) were 0.904, 0.891, and 0.875 for average pRNFL thickness and 0.834, 0.842, and 0.851 for average GCC thickness, respectively. The highest sensitivities at 80% specificity, 84.5% for average pRNFL thickness and 74.5% for average GCC thickness, were found with disc sizes fixed at 1.5 mm² and 2.5 mm², respectively. Diagnostic accuracy was similar between pRNFL and GCC thickness parameters. Although not statistically significant, there was a trend toward better diagnostic accuracy of pRNFL thickness measurement in cases of smaller discs. For GCC analysis, an inverse effect was observed.

  20. Modeling and design of Galfenol unimorph energy harvesters

    NASA Astrophysics Data System (ADS)

    Deng, Zhangxian; Dapino, Marcelo J.

    2015-12-01

    This article investigates the modeling and design of vibration energy harvesters that utilize iron-gallium (Galfenol) as a magnetoelastic transducer. Galfenol unimorphs are of particular interest; however, advanced models and design tools are lacking for these devices. Experimental measurements are presented for various unimorph beam geometries. A maximum average power density of 24.4 mW cm^(-3) and a peak power density of 63.6 mW cm^(-3) are observed. A modeling framework with fully coupled magnetoelastic dynamics, formulated as a 2D finite element model, and lumped-parameter electrical dynamics is presented and validated. A comprehensive parametric study considering pickup coil dimensions, beam thickness ratio, tip mass, bias magnet location, and remanent flux density (supplied by bias magnets) is developed for a 200 Hz, 9.8 m s^(-2) amplitude harmonic base excitation. For the set of optimal parameters, the maximum average power density and peak power density computed by the model are 28.1 and 97.6 mW cm^(-3), respectively.

  1. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini differences have asymptotically normal distributions and bounded influence functions and are B-robust; hence, unlike the standard deviation, they are protected against the presence of outliers in the sample. Results of a comparison of scale estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini differences is considered.
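
    Both estimators are simple to compute. A minimal sketch, using the standard Gaussian-consistency constants (the scaling factors are textbook values, not taken from the paper):

    ```python
    # Hedged sketch: two robust scale estimators, the median of absolute
    # deviations (MAD) and the Gini mean difference, on a contaminated sample.
    import numpy as np

    def mad(x):
        """MAD, scaled by 1.4826 to estimate sigma under normality."""
        return 1.4826 * np.median(np.abs(x - np.median(x)))

    def gini_mean_difference(x):
        """Mean absolute difference over all pairs, computed in O(n log n)."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        weights = 2.0 * np.arange(1, n + 1) - n - 1
        return 2.0 * np.sum(weights * x) / (n * (n - 1))

    rng = np.random.default_rng(0)
    sample = np.concatenate([rng.normal(0, 1, 95), rng.normal(0, 10, 5)])

    # sqrt(pi)/2 makes the Gini mean difference consistent for sigma under normality
    print(mad(sample), gini_mean_difference(sample) * np.sqrt(np.pi) / 2)
    ```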

  2. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for addressing the overfitting problem is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and it can be implemented using existing libraries for neural computing. The algorithm was tested on the problem of recognizing handwritten symbols from the MNIST database, as well as on the task of predicting time series (the series of the average monthly sunspot number and trajectories of the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for the recognition of handwritten digits from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.
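
    A minimal sketch of the underlying idea, compressing one trained weight matrix with a truncated principal-component (SVD) factorization; the layer shape and retained rank are illustrative assumptions, not the paper's settings:

    ```python
    # Hedged sketch: compress a dense layer's weight matrix with truncated SVD
    # (principal components); shapes and rank are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(784, 256))            # stand-in for a trained weight matrix

    k = 16                                     # number of components retained
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_k = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k approximation of W

    orig = W.size                              # independent parameters before
    reduced = U[:, :k].size + k + Vt[:k, :].size
    print(f"compression factor: {orig / reduced:.1f}x, "
          f"rel. error: {np.linalg.norm(W - W_k) / np.linalg.norm(W):.2f}")
    ```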

  3. Chaos control of Hastings-Powell model by combining chaotic motions.

    PubMed

    Danca, Marius-F; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
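
    A minimal sketch of the switching scheme on a stand-in scalar ODE (not the Hastings-Powell system): integrate while alternating the control parameter, then compare with integration at the averaged parameter value:

    ```python
    # Hedged sketch of Parameter Switching on dx/dt = p*x*(1-x), a stand-in
    # system, not the Hastings-Powell model.
    import numpy as np

    def f(x, p):
        return p * x * (1.0 - x)

    def integrate(p_schedule, x0=0.1, dt=1e-3):
        x = x0
        for p in p_schedule:              # one Euler step per scheduled p value
            x += dt * f(x, p)
        return x

    n_steps = 200_000
    p_values = [2.2, 3.0]                             # switched parameter values
    schedule = np.tile(p_values, n_steps // 2)        # periodic switching rule
    x_switched = integrate(schedule)
    x_averaged = integrate(np.full(n_steps, np.mean(p_values)))   # p = 2.6
    print(x_switched, x_averaged)      # the two end states land close together
    ```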

  4. Chaos control of Hastings-Powell model by combining chaotic motions

    NASA Astrophysics Data System (ADS)

    Danca, Marius-F.; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.

  5. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127

  6. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a lumped model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
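
    A minimal sketch of the genetic-algorithm stage, using pymoo's NSGA-II on placeholder objectives; coupling the objective function to actual SWMM runs (e.g. via pyswmm) is assumed and not shown:

    ```python
    # Hedged sketch: NSGA-II over two hypothetical subcatchment parameters with
    # placeholder objectives standing in for hydrograph-error metrics.
    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    class CalibrationProblem(ElementwiseProblem):
        def __init__(self):
            # two hypothetical normalized parameters, e.g. imperviousness, width
            super().__init__(n_var=2, n_obj=2,
                             xl=np.array([0.0, 0.0]), xu=np.array([1.0, 1.0]))

        def _evaluate(self, x, out, *args, **kwargs):
            # placeholders for, e.g., peak-flow error and volume error
            out["F"] = [(x[0] - 0.3) ** 2 + x[1] ** 2,
                        x[0] ** 2 + (x[1] - 0.7) ** 2]

    res = minimize(CalibrationProblem(), NSGA2(pop_size=40), ("n_gen", 50), seed=1)
    print(res.F[:5])    # a sample of the Pareto front of objective values
    ```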

  7. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f_k) and dissipation (f_ε). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.

  8. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.

  9. Mg I as a probe of the solar chromosphere - The atomic model

    NASA Technical Reports Server (NTRS)

    Mauas, Pablo J.; Avrett, Eugene H.; Loeser, Rudolf

    1988-01-01

    This paper presents a complete atomic model for Mg I line synthesis, where all the atomic parameters are based on recent experimental and theoretical data. It is shown how the computed profiles at 4571 Å and 5173 Å are influenced by the choice of these parameters and the number of levels included in the model atom. In addition, observed profiles of the 5173 Å b2 line and theoretical profiles for comparison (based on a recent atmospheric model for the average quiet Sun) are presented.

  10. Interactive vs. Non-Interactive Multi-Model Ensembles

    NASA Astrophysics Data System (ADS)

    Duane, G. S.

    2013-12-01

    If the members of an ensemble of different models are allowed to interact with one another at run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic (QG) channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. The advantage of supermodeling is seen in statistics such as the anticorrelation between blocking activity in the Atlantic and Pacific sectors, in the case of the QG channel model, rather than in overall blocking frequency. Likewise in climate models, the advantage of supermodeling is typically manifest in higher-order statistics rather than in quantities such as mean temperature.

  11. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycles of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on, e.g., microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty originates not solely from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. The average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 kg N/ha with a median value of 1.05 kg N/ha. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 kg N/ha due to fertilizer use. In the regional uncertainty quantification, the 20% likelihood range for N2O emissions is 0.79-1.37 kg N/ha (50% likelihood: 0.46-2.05 kg N/ha; 90% likelihood: 0.11-4.03 kg N/ha). Respective quantities were calculated for nitrate leaching. The method has proven its applicability for quantifying the parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.
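
    A minimal sketch of this propagation step, with a placeholder standing in for a LandscapeDNDC regional run:

    ```python
    # Hedged sketch: propagate sampled parameter vectors through a model and
    # summarize the realizations; the "model" is a placeholder, not LandscapeDNDC.
    import numpy as np

    rng = np.random.default_rng(42)
    theta = rng.multivariate_normal(mean=[1.0, 0.5],
                                    cov=[[0.04, 0.0], [0.0, 0.01]],
                                    size=400)        # 400 sampled parameter vectors

    def regional_emission(t):
        # stand-in for a regional inventory run with parameter vector t
        return np.exp(t[0]) * t[1]

    realizations = np.array([regional_emission(t) for t in theta])
    print("mean =", realizations.mean(), "median =", np.median(realizations))
    for p in (20, 50, 90):                           # central likelihood ranges
        lo, hi = np.percentile(realizations, [50 - p / 2, 50 + p / 2])
        print(f"{p}% range: {lo:.2f} - {hi:.2f}")
    ```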

  12. Comparison of different models for ground-level atmospheric turbulence strength (Cn2) prediction with a new model according to local weather data for FSO applications.

    PubMed

    Arockia Bazil Raj, A; Arputha Vijaya Selvi, J; Durairaj, S

    2015-02-01

    Atmospheric parameters strongly affect the performance of free-space optical communication (FSOC) systems when the optical wave propagates through the inhomogeneous turbulent transmission medium. Developing a model for accurate prediction of the atmospheric turbulence strength (Cn2) from meteorological parameters (weather data) is therefore significant for understanding the behavior of the FSOC channel during different seasons. The construction of a dedicated free-space optical link over a range of 0.5 km at an altitude of 15.25 m, built at Thanjavur (Tamil Nadu), is described in this paper. The power level and beam centroid information of the received signal are measured continuously, together with simultaneous weather data, using an optoelectronic assembly and the developed weather station, respectively, and are recorded in a data-logging computer. Existing models that exhibit relatively small prediction errors are reviewed and selected for comparative analysis. Measured weather data (as input factors) and Cn2 (as a response factor) of size [177,147×4] are used for linear regression analysis and to design mathematical models more suitable for the test field. Along with the model formulation methodologies, we present the contributions of the input factors' individual and combined effects on the response surface and the coefficient of determination (R²) estimated using analysis-of-variance tools. An R² value of 98.93% is obtained using the new model, model equation V, from a confirmatory test conducted with a testing data set of size [2000×4]. In addition, the prediction accuracies of the selected and the new models are investigated during different seasons over a one-year period using the statistics of day-averaged, week-averaged, month-averaged, and season-averaged diurnal Cn2 profiles, and are verified in terms of the sum of absolute error (SAE). A maximum average SAE in Cn2 prediction of 2.3×10^(-13) m^(-2/3) is achieved using the new model over a wide range of dynamic meteorological parameters during the different local seasons.
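
    A minimal sketch of the regression step, with an assumed three-predictor log-space form (not the paper's model equation V) and synthetic weather data:

    ```python
    # Hedged sketch: linear regression of log10(Cn2) on weather predictors; the
    # predictor set and coefficients are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    # columns: temperature (C), relative humidity (%), wind speed (m/s) - assumed
    X = rng.uniform([15, 20, 0], [40, 90, 10], size=(1000, 3))
    log_cn2 = -14 + 0.02 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 0.1, 1000)

    model = LinearRegression().fit(X, log_cn2)
    print("R^2 =", model.score(X, log_cn2))     # coefficient of determination
    print("predicted Cn2 =", 10 ** model.predict(X[:1]))
    ```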

  13. Experimental Research and Mathematical Modeling of Parameters Affecting Cutting Force and Surface Roughness in the CNC Turning Process

    NASA Astrophysics Data System (ADS)

    Zeqiri, F.; Alkan, M.; Kaya, B.; Toros, S.

    2018-01-01

    In this paper, the effects of cutting parameters on cutting forces and surface roughness are determined based on the Taguchi experimental design method. A Taguchi L9 orthogonal array is used to investigate the effects of the machining parameters. Optimal cutting conditions are determined using the signal-to-noise (S/N) ratio, which is calculated from the average surface roughness and cutting force. Using the results of this analysis, the effects of the parameters on both average surface roughness and cutting force are quantified in Minitab 17 using the ANOVA method. The investigated material is Inconel 625, considered in two cases: with and without heat treatment. The predicted values and the measured values are very close to each other. A confirmation test showed that the Taguchi method was very successful in the optimization of machining parameters with respect to surface roughness and cutting forces in the CNC turning process.
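
    A minimal sketch of the smaller-is-better S/N ratio used in such Taguchi analyses; the replicate values are invented:

    ```python
    # Hedged sketch: Taguchi "smaller is better" signal-to-noise ratio for one
    # experimental run; the roughness replicates are placeholders.
    import numpy as np

    def sn_smaller_is_better(y):
        """S/N = -10 log10(mean(y^2)); a larger S/N means a more robust setting."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    roughness_replicates = [0.82, 0.79, 0.85]   # Ra values (um) for one run, assumed
    print(sn_smaller_is_better(roughness_replicates))
    ```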

  14. Individual Colorimetric Observer Model

    PubMed Central

    Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent

    2016-01-01

    This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905

  15. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pKa shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework, with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.

  16. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
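
    A minimal sketch of the information-criterion flavour of model averaging: convert per-model criterion values into weights and average the predictions (the GLUE variant is not shown). The AIC values and predictions are placeholders, not results for the 25 flow models:

    ```python
    # Hedged sketch: Akaike weights and a model-averaged prediction.
    import numpy as np

    aic = np.array([210.3, 212.1, 215.8, 211.0])    # one value per model (assumed)
    pred = np.array([3.2, 3.9, 4.4, 3.0])           # e.g. predicted head (assumed)

    delta = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                        # Akaike weights, sum to 1

    avg_pred = np.sum(weights * pred)               # model-averaged prediction
    var_between = np.sum(weights * (pred - avg_pred) ** 2)  # between-model spread
    print(weights, avg_pred, var_between)
    ```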

  17. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  18. Muscle Force-Velocity Relationships Observed in Four Different Functional Tests.

    PubMed

    Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan

    2017-02-01

    The aims of the present study were to investigate the shape and strength of the force-velocity relationships observed in different functional movement tests and explore the parameters depicting the force, velocity and power producing capacities of the tested muscles. Twelve subjects were tested on maximum performance in vertical jumps, cycling, bench press throws, and bench pulls performed against different loads. Thereafter, both the averaged and maximum force and velocity variables recorded from individual trials were used for force-velocity relationship modeling. The observed individual force-velocity relationships were exceptionally strong (median correlation coefficients ranged from r = 0.930 to r = 0.995) and approximately linear, independently of the test and variable type. Most of the relationship parameters observed from the averaged and maximum force and velocity variable types were strongly related in all tests (r = 0.789-0.991), except for those in vertical jumps (r = 0.485-0.930). However, the generalizability of the force-velocity relationship parameters depicting maximum force, velocity and power of the tested muscles across different tests was inconsistent and on average moderate. We concluded that the linear force-velocity relationship model based on either maximum or averaged force-velocity data could provide the outcomes depicting force, velocity and power generating capacity of the tested muscles, although such outcomes can only be partially generalized across different muscles.
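
    A minimal sketch of fitting the linear force-velocity model F(V) = F0 - aV to load-varied trials and extracting the capacity parameters; the data values are invented, not the study's:

    ```python
    # Hedged sketch: linear F-V fit; F0 and V0 are the axis intercepts and
    # Pmax = F0*V0/4 is the apex of the parabolic power-velocity curve.
    import numpy as np

    velocity = np.array([0.8, 1.2, 1.7, 2.3, 2.9])          # m/s, one per load
    force = np.array([820.0, 700.0, 560.0, 410.0, 250.0])   # N

    slope, f0 = np.polyfit(velocity, force, 1)   # slope < 0, intercept F0
    v0 = -f0 / slope                             # velocity-axis intercept V0
    p_max = f0 * v0 / 4.0                        # maximum power of the fit
    r = np.corrcoef(velocity, force)[0, 1]
    print(f"F0 = {f0:.0f} N, V0 = {v0:.2f} m/s, Pmax = {p_max:.0f} W, r = {r:.3f}")
    ```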

  19. Muscle Force-Velocity Relationships Observed in Four Different Functional Tests

    PubMed Central

    Zivkovic, Milena Z.; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan

    2017-01-01

    The aims of the present study were to investigate the shape and strength of the force-velocity relationships observed in different functional movement tests and explore the parameters depicting the force, velocity and power producing capacities of the tested muscles. Twelve subjects were tested on maximum performance in vertical jumps, cycling, bench press throws, and bench pulls performed against different loads. Thereafter, both the averaged and maximum force and velocity variables recorded from individual trials were used for force-velocity relationship modeling. The observed individual force-velocity relationships were exceptionally strong (median correlation coefficients ranged from r = 0.930 to r = 0.995) and approximately linear, independently of the test and variable type. Most of the relationship parameters observed from the averaged and maximum force and velocity variable types were strongly related in all tests (r = 0.789-0.991), except for those in vertical jumps (r = 0.485-0.930). However, the generalizability of the force-velocity relationship parameters depicting maximum force, velocity and power of the tested muscles across different tests was inconsistent and on average moderate. We concluded that the linear force-velocity relationship model based on either maximum or averaged force-velocity data could provide the outcomes depicting force, velocity and power generating capacity of the tested muscles, although such outcomes can only be partially generalized across different muscles. PMID:28469742

  20. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
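
    A minimal sketch of simulating the Kumaraswamy building block by inverse-CDF sampling, with a rescaling from (0, 1) to a general interval:

    ```python
    # Hedged sketch: Kumaraswamy variates by inverting F(x) = 1 - (1 - x^a)^b,
    # with an optional rescaling from (0, 1) to a general interval (lo, hi).
    import numpy as np

    def rkumaraswamy(n, a, b, lo=0.0, hi=1.0, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(size=n)
        x = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # inverse CDF
        return lo + (hi - lo) * x

    sample = rkumaraswamy(10_000, a=2.0, b=3.0)
    print(sample.mean(), sample.min(), sample.max())      # stays inside (0, 1)
    ```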

  1. Determination of Indicators of Ecological Change

    DTIC Science & Technology

    2004-09-01

    simultaneously characterized parameters for more than one forest (e.g., Huber and Iroume, 2001; Tobón Marin et al., 2000). As parameters (e.g...necessary to apply the revised model for use in five forest biomes, 2) use the model to predict precipitation interception and compare the measured and...larger interception losses than many other forest biomes. The within-plot sampling coefficient of variation, ranging from a study average of 0.11 in

  2. Thermodynamic characterization of tandem mismatches found in naturally occurring RNA

    PubMed Central

    Christiansen, Martha E.; Znosko, Brent M.

    2009-01-01

    Although all sequence symmetric tandem mismatches and some sequence asymmetric tandem mismatches have been thermodynamically characterized and a model has been proposed to predict the stability of previously unmeasured sequence asymmetric tandem mismatches [Christiansen,M.E. and Znosko,B.M. (2008) Biochemistry, 47, 4329–4336], experimental thermodynamic data for frequently occurring tandem mismatches is lacking. Since experimental data is preferred over a predictive model, the thermodynamic parameters for 25 frequently occurring tandem mismatches were determined. These new experimental values, on average, are 1.0 kcal/mol different from the values predicted for these mismatches using the previous model. The data for the sequence asymmetric tandem mismatches reported here were then combined with the data for 72 sequence asymmetric tandem mismatches that were published previously, and the parameters used to predict the thermodynamics of previously unmeasured sequence asymmetric tandem mismatches were updated. The average absolute difference between the measured values and the values predicted using these updated parameters is 0.5 kcal/mol. This updated model improves the prediction for tandem mismatches that were predicted rather poorly by the previous model. This new experimental data and updated predictive model allow for more accurate calculations of the free energy of RNA duplexes containing tandem mismatches, and, furthermore, should allow for improved prediction of secondary structure from sequence. PMID:19509311

  3. A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.

    2018-04-01

    A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).

  4. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (20th) Held in Vienna, Virginia on 29 November-1 December 1988

    DTIC Science & Technology

    1988-12-01

    PERFORMANCE IN REAL TIME* Dr. James A. Barnes, Austron, Boulder, CO. Abstract: Kalman filters and ARIMA models provide optimum control and evaluation tech...estimates of the model parameters (e.g., the phi's and theta's for an ARIMA model). These model parameters are often evaluated in a batch mode on a...random walk FM, and linear frequency drift. In ARIMA models, this is equivalent to an ARIMA (0,2,2) with a non-zero average second difference. Using

  5. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
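
    A minimal sketch of the convergence device described above, on a one-dimensional toy density standing in for the linkage posterior: the first point is drawn exactly from the target by rejection sampling, and detailed balance then keeps every subsequent Metropolis iterate target-distributed:

    ```python
    # Hedged sketch: rejection-sampled start plus a Metropolis kernel whose
    # equilibrium is the same (unnormalized) target density.
    import numpy as np

    rng = np.random.default_rng(3)

    def target(x):                          # unnormalized toy density
        return np.exp(-0.5 * x ** 2) * (1 + 0.5 * np.sin(3 * x)) ** 2

    def rejection_sample():
        # envelope 12 * N(0, sd=2) dominates the target (max ratio ~0.94)
        while True:
            x = rng.normal(0.0, 2.0)
            envelope = 12.0 * np.exp(-x ** 2 / 8.0) / np.sqrt(8.0 * np.pi)
            if rng.uniform() < target(x) / envelope:
                return x

    x = rejection_sample()                  # chain starts at equilibrium
    chain = [x]
    for _ in range(10_000):                 # symmetric random-walk Metropolis
        prop = x + rng.normal(0.0, 1.0)
        if rng.uniform() < target(prop) / target(x):
            x = prop
        chain.append(x)
    print(np.mean(chain), np.std(chain))
    ```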

  6. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
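
    A minimal sketch of the surrogate idea, fitting a cheap quadratic response surface to a few expensive model runs; the model function below is a placeholder, not CLM:

    ```python
    # Hedged sketch: quadratic polynomial surrogate fitted by least squares to
    # samples of an "expensive" model; shapes and functions are placeholders.
    import numpy as np

    rng = np.random.default_rng(11)
    theta = rng.uniform(0.0, 1.0, size=(50, 2))       # sampled parameter pairs

    def expensive_model(t):                           # stand-in for a CLM run
        return np.sin(3 * t[..., 0]) + 0.5 * t[..., 1] ** 2

    def design(t):                                    # quadratic basis in two inputs
        t1, t2 = t[..., 0], t[..., 1]
        return np.stack([np.ones_like(t1), t1, t2, t1**2, t1 * t2, t2**2], axis=-1)

    coef, *_ = np.linalg.lstsq(design(theta), expensive_model(theta), rcond=None)

    test = rng.uniform(0.0, 1.0, size=(3, 2))
    print(design(test) @ coef)                        # cheap surrogate predictions
    print(expensive_model(test))                      # truth for comparison
    ```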

  7. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Secondly, we will consider the Middle East, where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.

  8. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    NASA Astrophysics Data System (ADS)

    Tudora, A.; Hambsch, F.-J.

    2017-08-01

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade, remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and the prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n,f), recently measured at JRC-Geel (as well as other various prompt emission data), allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy E_γ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), E_γ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), E_γ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of the IAEA. To provide average prompt emission quantities as a function of A, as a function of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability in predicting prompt emission data for fissioning nuclei and incident energies for which the experimental information is completely missing. The PbP treatment can also provide input parameters for the improved Los Alamos model with non-equal residual temperature distributions recently reported by Madland and Kahler, especially for fissioning nuclei without any experimental information concerning the prompt emission.

  9. Two-order-parameter description of liquid Al under five different pressures

    NASA Astrophysics Data System (ADS)

    Li, Y. D.; Hao, Qing-Hai; Cao, Qi-Long; Liu, C. S.

    2008-11-01

    In the present work, using the glue potential, constant-pressure molecular-dynamics simulations of liquid Al under five different pressures and a systematic analysis of the local atomic structures have been performed in order to test the two-order-parameter model proposed by Tanaka [Phys. Rev. Lett. 80, 5750 (1998)], originally for explaining the unusual behaviors of liquid water. The temperature dependence of the bond order parameter Q6 in liquid Al under the five different pressures can be well fitted by the functional expression Q6/(1-Q6) = Q6^0 exp[(ΔE - PΔV)/(k_B T)], which yields the energy gain ΔE and the volume change ΔV upon the formation of a locally favored structure: ΔE = 0.025 eV and ΔV = -0.27 Å³. ΔE is nearly equal to the difference between the average bond energy of the other type I bonds and the average bond energy of 1551 bonds (characterizing the icosahedron-like local structure); ΔV can be explained as the average volume occupied by one atom in icosahedra minus that occupied by one atom in other structures. With the obtained ΔE and ΔV, it is satisfactorily explained that the density of liquid Al displays a much weaker nonlinear dependence on temperature under lower pressures. It is thus demonstrated that the behavior of liquid Al can be well described by the two-order-parameter model.
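
    A minimal sketch of fitting the quoted expression to bond-order data at fixed pressure with scipy; the "data" are synthetic, not simulation output, and the PΔV term is set to zero for illustration:

    ```python
    # Hedged sketch: least-squares fit of Q6/(1-Q6) = Q6_0*exp((dE - P*dV)/(kB*T)).
    import numpy as np
    from scipy.optimize import curve_fit

    KB = 8.617e-5                 # Boltzmann constant, eV/K
    P_DV = 0.0                    # P*dV contribution, eV (assumed negligible here)

    def q6_model(T, q0, dE):
        y = q0 * np.exp((dE - P_DV) / (KB * T))   # Q6/(1-Q6)
        return y / (1.0 + y)                      # back to Q6

    T = np.linspace(950.0, 1400.0, 10)
    q6_data = q6_model(T, 0.05, 0.025) + np.random.default_rng(5).normal(0, 1e-3, T.size)

    popt, _ = curve_fit(q6_model, T, q6_data, p0=[0.1, 0.02])
    print("Q6_0 = %.3f, dE = %.3f eV" % tuple(popt))
    ```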

  10. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution of the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainties that characterize the time-averaged density gradient lengths, time-averaged densities, and fluctuation density levels are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
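
    A minimal sketch of the surrogate idea under simplifying assumptions: the parameter dependence of a single scalar output is approximated by a Chebyshev expansion built from a few collocation evaluations, and the input uncertainty is then propagated through the cheap surrogate. The response function and parameter range are invented.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Quantity of interest q(lam): a cheap placeholder for a simulation
        # output that depends on the parallel-loss input parameter lam.
        def q_of_lam(lam):
            return 1.0 / (1.0 + 0.5 * lam)          # hypothetical response

        lam_lo, lam_hi = 0.5, 1.5                    # assumed parameter range
        n = 8
        x = np.cos(np.pi * (np.arange(n) + 0.5) / n)   # Chebyshev nodes on [-1, 1]
        lam = 0.5 * (lam_hi + lam_lo) + 0.5 * (lam_hi - lam_lo) * x
        coeffs = C.chebfit(x, [q_of_lam(v) for v in lam], deg=n - 1)

        # Propagate a uniform distribution on lam through the cheap surrogate.
        samples = np.random.uniform(-1.0, 1.0, 100_000)
        q_samples = C.chebval(samples, coeffs)
        print(f"mean = {q_samples.mean():.4f}, std = {q_samples.std():.4f}")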

  11. [Spatiotemporal variation characteristics and related affecting factors of actual evapotranspiration in the Hun-Taizi River Basin, Northeast China].

    PubMed

    Feng, Xue; Cai, Yan-Cong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Wu, Jia-Bing; Yuan, Feng-Hui

    2014-10-01

    Based on meteorological and hydrological data from 1970 to 2006, the advection-aridity (AA) model with calibrated parameters was used to calculate evapotranspiration in the Hun-Taizi River Basin in Northeast China. The original parameter of the AA model was tuned using the water balance method, and four subbasins were then selected for validation. Spatiotemporal variation characteristics of evapotranspiration and related affecting factors were analyzed using linear trend analysis, moving averages, kriging interpolation and sensitivity analysis. The results showed that an empirical parameter value of 0.75 in the AA model was suitable for the Hun-Taizi River Basin, with an error of 11.4%. In the Hun-Taizi River Basin, the average annual actual evapotranspiration was 347.4 mm, with a slight upward trend of 1.58 mm per decade that was not statistically significant. The actual evapotranspiration showed a single-peaked intra-annual pattern, with its maximum in July; evapotranspiration in summer was higher than in spring and autumn, and was smallest in winter. The annual average evapotranspiration decreased from the northwest to the southeast of the basin over 1970-2006, with minor spatial differences. Net radiation was largely responsible for the change of actual evapotranspiration in the Hun-Taizi River Basin.
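
    For orientation, a sketch of the Brutsaert-Stricker advection-aridity formulation that underlies the AA model, via the complementary relationship; the default coefficient, the wind-function form and all input values below are generic textbook assumptions, not the calibrated values of this study.

        # Brutsaert-Stricker advection-aridity estimate via the complementary
        # relationship: ETa = (2a - 1)(D/(D+g))(Rn - G) - (g/(D+g))*Ea.
        # Energy terms are assumed pre-converted to mm/day of equivalent
        # evaporation; vapor pressures e in hPa, wind speed u2 in m/s at 2 m.
        def aa_et(Rn, G, delta, gamma, u2, es, ea, alpha=1.26):
            radiative = (2.0 * alpha - 1.0) * delta / (delta + gamma) * (Rn - G)
            Ea = 0.26 * (1.0 + 0.54 * u2) * (es - ea)    # Penman wind function
            aerodynamic = gamma / (delta + gamma) * Ea
            return radiative - aerodynamic

        # A warm summer day: Rn - G of ~6 mm/day equivalent, moderate wind.
        print(aa_et(Rn=6.0, G=0.0, delta=1.8, gamma=0.66, u2=2.0, es=31.7, ea=17.0))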

  12. The attenuation of Fourier amplitudes for rock sites in eastern North America

    USGS Publications Warehouse

    Atkinson, Gail M.; Boore, David M.

    2014-01-01

    We develop an empirical model of the decay of Fourier amplitudes for earthquakes of M 3–6 recorded on rock sites in eastern North America and discuss its implications for source parameters. Attenuation at distances from 10 to 500 km may be adequately described using a bilinear model with a geometric spreading of 1/R^1.3 to a transition distance of 50 km and a geometric spreading of 1/R^0.5 at greater distances. For low frequencies and distances less than 50 km, the effective geometric spreading given by the model is perturbed using a frequency- and hypocentral-depth-dependent factor defined in such a way as to increase amplitudes at lower frequencies near the epicenter but leave the 1 km source amplitudes unchanged. The associated anelastic attenuation is determined for each event, with an average value being given by a regional quality factor of Q = 525f^0.45. This model provides a match, on average, between the known seismic moment of events and the inferred low-frequency spectral amplitudes at R = 1 km (obtained by correcting for the attenuation model). The inferred Brune stress parameters from the high-frequency source terms are about 600 bars (60 MPa), on average, for events of M > 4.5.
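
    The attenuation model lends itself to a direct numerical transcription. The sketch below assumes a crustal shear-wave velocity of 3.7 km/s, which is not stated in the abstract; the bilinear spreading is made continuous at the transition distance.

        import numpy as np

        BETA = 3.7          # assumed crustal shear-wave velocity (km/s)
        R_T = 50.0          # transition distance (km)

        def geometric_spreading(R):
            """Bilinear spreading: 1/R^1.3 to 50 km, 1/R^0.5 beyond (continuous)."""
            R = np.asarray(R, dtype=float)
            near = R ** -1.3
            far = R_T ** -1.3 * (R / R_T) ** -0.5
            return np.where(R <= R_T, near, far)

        def attenuation(f, R):
            """Fourier amplitude decay: spreading times anelastic term, Q = 525 f^0.45."""
            Q = 525.0 * f ** 0.45
            return geometric_spreading(R) * np.exp(-np.pi * f * R / (Q * BETA))

        # Amplitude at 100 km relative to the 1 km source level, for f = 1 Hz:
        print(attenuation(1.0, 100.0) / attenuation(1.0, 1.0))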

  13. Improved continuum lowering calculations in screened hydrogenic model with l-splitting for high energy density systems

    NASA Astrophysics Data System (ADS)

    Ali, Amjad; Shabbir Naz, G.; Saleem Shahzad, M.; Kouser, R.; Aman-ur-Rehman; Nasim, M. H.

    2018-03-01

    The energy states of the bound electrons in high energy density systems (HEDS) are significantly affected by the electric field of the neighboring ions. Because of this effect, bound electrons require less energy to become free and move into the continuum. This phenomenon of reduction in potential is termed ionization potential depression (IPD) or continuum lowering (CL). The foremost parameter depicting this change is the average charge state; therefore, accurate modeling of CL is imperative in modeling atomic data for the computation of radiative and thermodynamic properties of HEDS. In this paper, we present an improved model of CL in the screened hydrogenic model with l-splitting (SHML) proposed by Faussurier, Blancard and Renaudin [High Energy Density Physics 4 (2008) 114] and its effect on the average charge state. We propose a level-charge-dependent calculation of the CL potential energy and the inclusion of exchange and correlation energy in SHML. This makes the model more relevant to HEDS and frees it from an empirical CL parameter tied to the plasma environment. We have implemented both the original and the modified SHML models in our code OPASH and benchmarked our results against experiments and other state-of-the-art simulation codes. We compared our results for the average charge state of carbon, beryllium, aluminum, iron and germanium against the published literature and found very reasonable agreement.

  14. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
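
    A toy sketch of the reoptimization loop described above, assuming a Gaussian detector response and an erf-shaped penumbra profile (both stand-ins for the real chamber kernel and TPS beam model): the calculated profile is convolved with the same response before every comparison, so the optimizer recovers the underlying penumbra parameter.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.optimize import minimize_scalar
        from scipy.special import erf

        dx = 0.1                                   # lateral resolution (mm)
        x = np.arange(-50.0, 50.0, dx)
        sigma_det = 3.0 / dx                       # assumed chamber response width: 3 mm

        def calc_profile(penumbra_mm):
            """Toy 'TPS-calculated' profile: 60 mm field with erf-shaped penumbra."""
            return 0.5 * (erf((x + 30.0) / penumbra_mm) - erf((x - 30.0) / penumbra_mm))

        # "Measured" profile = underlying profile blurred by the chamber response.
        measured = gaussian_filter1d(calc_profile(2.0), sigma_det)

        # Reoptimize the penumbra parameter, convolving the calculation with the
        # same detector response before comparing with the measurement.
        def objective(p):
            return np.sum((gaussian_filter1d(calc_profile(p), sigma_det) - measured) ** 2)

        res = minimize_scalar(objective, bounds=(0.5, 10.0), method="bounded")
        print(f"recovered penumbra parameter: {res.x:.2f} mm")   # ~2.0 mm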

  15. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
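
    A minimal sketch of the brute-force baseline that the MINLP solvers are compared against, written with statsmodels (an assumed tool, not the authors' code): enumerate (p, q) orders, fit each by Kalman-filter maximum likelihood, and keep the AIC-minimizing model. The series is synthetic.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical series with some ARMA-like memory injected.
        rng = np.random.default_rng(0)
        y = rng.standard_normal(300)
        for i in range(2, y.size):
            y[i] += 0.5 * y[i - 1] - 0.3 * y[i - 2]

        best = None
        for p in range(4):
            for q in range(4):
                try:
                    fit = ARIMA(y, order=(p, 0, q)).fit()   # Kalman-filter likelihood
                except Exception:
                    continue                                # skip failed fits
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, p, q)

        print(f"best (p, q) by AIC: ({best[1]}, {best[2]}), AIC = {best[0]:.1f}")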

  16. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  17. Global distribution of urban parameters derived from high-resolution global datasets for weather modelling

    NASA Astrophysics Data System (ADS)

    Kawano, N.; Varquez, A. C. G.; Dong, Y.; Kanda, M.

    2016-12-01

    Numerical models such as the Weather Research and Forecasting model coupled with a single-layer Urban Canopy Model (WRF-UCM) are powerful tools for investigating the urban heat island. Urban parameters such as average building height (Have), plan area index (λp) and frontal area index (λf) are necessary inputs for the model. In general, these parameters are assumed uniform in WRF-UCM, but this leads to an unrealistic urban representation. Distributed urban parameters can instead be incorporated into WRF-UCM to capture detailed urban effects. The problem is that distributed building information is not readily available for most megacities, especially in developing countries, and acquiring real building parameters often requires a huge amount of time and money. In this study, we investigated the potential of using globally available satellite-captured datasets for the estimation of the parameters Have, λp, and λf. The global datasets comprise a high-spatial-resolution population dataset (LandScan, by Oak Ridge National Laboratory), nighttime lights (NOAA), and vegetation fraction (NASA). True samples of Have, λp, and λf were acquired from actual building footprints from satellite images and 3D building databases of Tokyo, New York, Paris, Melbourne, Istanbul, Jakarta and other cities. Regression equations were then derived from the block-averaging of spatial pairs of real parameters and global datasets. Results show that two regression curves are necessary to estimate Have and λf from the combination of population and nightlight, depending on the city's level of development; Gross Domestic Product (GDP) can serve as an index for deciding which equation to use for a given city. On the other hand, λp has less dependence on GDP but shows a negative relationship with vegetation fraction. Finally, a simplified but precise approximation of urban parameters through readily available, high-resolution global datasets and our derived regressions can be utilized to estimate a global distribution of urban parameters for later incorporation into a weather model, thus allowing us to acquire a global understanding of urban climate (Global Urban Climatology). Acknowledgment: This research was supported by the Environment Research and Technology Development Fund (S-14) of the Ministry of the Environment, Japan.
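
    A sketch of the regression step with an assumed functional form and invented training pairs; the study derives separate curves per GDP class, while a single log-linear fit is shown here for brevity.

        import numpy as np

        # Hypothetical block-averaged training pairs: population density
        # (LandScan), nighttime lights (NOAA DN values), and "true" average
        # building height Have from 3D building data. All values are invented.
        pop = np.array([2.0e3, 8.0e3, 1.5e4, 3.0e4, 6.0e4])    # persons / km^2
        ntl = np.array([18.0, 35.0, 48.0, 57.0, 63.0])         # nightlight DN
        h_ave = np.array([4.5, 8.0, 12.0, 18.0, 30.0])         # m, from footprints

        # One assumed regression form, Have ~ a + b*log(pop) + c*ntl.
        X = np.column_stack([np.ones_like(pop), np.log(pop), ntl])
        coef, *_ = np.linalg.lstsq(X, h_ave, rcond=None)

        def predict_have(p, n):
            return coef @ np.array([1.0, np.log(p), n])

        print(f"estimated Have for pop=2e4, ntl=50: {predict_have(2.0e4, 50.0):.1f} m")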

  18. Temperature and velocity conditions of air flow in vertical channel of hinged ventilated facade of a multistory building.

    NASA Astrophysics Data System (ADS)

    Statsenko, Elena; Ostrovaia, Anastasia; Pigurin, Andrey

    2018-03-01

    This article considers the influence of a building's height and of the presence of horizontal mounting joints (rustications) on the heat transfer parameters in the air gap of a hinged ventilated facade. A numerical description of the processes occurring in the heat-gravitational flow is given. The average velocity and temperature of the heat-gravitational flow are determined for structures with open and with sealed rustication joints, with the geometric parameters of the gap unchanged. The dependence of the parameters that influence the thermomechanical characteristics of the enclosing structure on the internal parameters of the system is derived. Physical modeling of real multistory structures is performed by projecting the actual parameters onto a reduced laboratory model (scaling).

  19. Variability analysis of SAR from 20 MHz to 2.4 GHz for different adult and child models using finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Conil, E.; Hadjem, A.; Lacroux, F.; Wong, M. F.; Wiart, J.

    2008-03-01

    This paper deals with the variability of body models used in numerical dosimetry studies. Six adult anthropomorphic voxel models have been collected and used to build 5-, 8- and 12-year-old children using a morphing method that respects anatomical parameters. Finite-difference time-domain calculations of the specific absorption rate (SAR) have been performed for a range of frequencies from 20 MHz to 2.4 GHz for isolated models illuminated by plane waves. The whole-body-averaged SAR is presented, as well as the average over specific tissues such as skin, muscle, fat or bone and over specific parts of the body such as the head, legs, arms or torso. Results point out the variability of the adult models: the standard deviation of the whole-body-averaged SAR of adult models can reach 40%. All phantoms are exposed at the ICNIRP reference levels. Results show that for adults, compliance with the reference levels ensures compliance with the basic restrictions; for the child models involved in this study, however, the whole-body-averaged SAR exceeds the basic restrictions by up to 40%.

  20. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone; however, they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another, intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Susquehanna Shale Hills CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that, without any calibration, the model successfully predicts seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  1. Role of dimensionality in Axelrod's model for the dissemination of culture

    NASA Astrophysics Data System (ADS)

    Klemm, Konstantin; Eguíluz, Víctor M.; Toral, Raúl; Miguel, Maxi San

    2003-09-01

    We analyze a model of social interaction in one- and two-dimensional lattices for a moderate number of cultural features. We introduce an order parameter as a function of the overlap between neighboring sites. In a one-dimensional chain, we observe that the dynamics is consistent with a second-order transition, where the order parameter changes continuously and the average domain size diverges at the transition point. However, in a two-dimensional lattice the order parameter is discontinuous at the transition point, characteristic of a first-order transition between an ordered and a disordered state.

  2. Quantification of the impact of precipitation spatial distribution uncertainty on predictive uncertainty of a snowmelt runoff model

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.

    2012-04-01

    This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product of the observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of the precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first-order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) were known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) were unknown, but all the remaining model parameters could be fixed. In this study, first-order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First-order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This is because the actual value taken by FP5 does not have much influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first-order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi were known, even with no information available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remained unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
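
    A minimal sketch of grouped Sobol index estimation using the SALib package (an assumed tool, not the authors'); a cheap algebraic stand-in replaces the monthly snowmelt runoff model, with FP5 given a deliberately negligible role to mirror the finding above.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 5,
            "names": ["FP1", "FP2", "FP3", "FP4", "FP5"],
            "bounds": [[0.5, 2.0]] * 5,
        }

        def model_output(fp):
            # placeholder response: the glacier-zone factor FP5 barely matters
            return fp[0] + 0.8 * fp[1] + 0.6 * fp[2] + 0.4 * fp[3] + 0.01 * fp[4]

        X = saltelli.sample(problem, 1024)
        Y = np.apply_along_axis(model_output, 1, X)
        Si = sobol.analyze(problem, Y)
        print("first-order:", dict(zip(problem["names"], Si["S1"].round(3))))
        print("total:      ", dict(zip(problem["names"], Si["ST"].round(3))))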

  3. Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

    NASA Astrophysics Data System (ADS)

    Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam

    2017-11-01

    Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas (STS) by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

  4. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.

  5. Chaos control of Hastings–Powell model by combining chaotic motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in

    2016-04-15

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the average of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switched values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied here for the first time to the HP system, by showing that losing strategies can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
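
    A minimal sketch of the PS idea on the Hastings-Powell food-chain equations: integrate while periodically switching a parameter between two values, then compare with a single run at their average. The choice of b1 as the switched parameter, its two values, and the switching interval are assumptions for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hastings-Powell model; b1 is the switched control parameter.
        a1, a2, b2, d1, d2 = 5.0, 0.1, 2.0, 0.4, 0.01

        def hp(t, u, b1):
            x, y, z = u
            f1 = a1 * x / (1.0 + b1 * x)
            f2 = a2 * y / (1.0 + b2 * y)
            return [x * (1.0 - x) - f1 * y,
                    f1 * y - f2 * z - d1 * y,
                    f2 * z - d2 * z]

        b_values = (2.5, 3.5)      # two switched values (assumed regimes)
        dt, n_switch = 0.5, 4000   # switching interval and number of switches
        u = np.array([0.8, 0.2, 9.0])
        trajectory = []
        for k in range(n_switch):
            b1 = b_values[k % 2]                    # periodic switching rule
            sol = solve_ivp(hp, (0.0, dt), u, args=(b1,))
            u = sol.y[:, -1]
            trajectory.append(u.copy())

        # For comparison, a single run with the averaged parameter,
        # b1 = mean(b_values) = 3.0, should yield a matching attractor.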

  6. A review of US anthropometric reference data (1971–2000) with comparisons to both stylized and tomographic anatomic models

    NASA Astrophysics Data System (ADS)

    Huh, C.; Bolch, W. E.

    2003-10-01

    Two classes of anatomic models currently exist for use in both radiation protection and radiation dose reconstruction: stylized mathematical models and tomographic voxel models. The former utilize 3D surface equations to represent internal organ structure and external body shape, while the latter are based on segmented CT or MR images of a single individual. While tomographic models are clearly more anthropomorphic than stylized models, a given model's characterization as being anthropometric is dependent upon the reference human to which the model is compared. In the present study, data on total body mass, standing/sitting heights and body mass index are collected and reviewed for the US population covering the time interval from 1971 to 2000. These same anthropometric parameters are then assembled for the ORNL series of stylized models, the GSF series of tomographic models (Golem, Helga, Donna, etc), the adult male Zubal tomographic model and the UF newborn tomographic model. The stylized ORNL models of the adult male and female are found to be fairly representative of present-day average US males and females, respectively, in terms of both standing and sitting heights for ages between 20 and 60-80 years. While the ORNL adult male model provides a reasonably close match to the total body mass of the average US 21-year-old male (within ~5%), present-day 40-year-old males have an average total body mass that is ~16% higher. For radiation protection purposes, the use of the larger 73.7 kg adult ORNL stylized hermaphrodite model provides a much closer representation of average present-day US females at ages ranging from 20 to 70 years. In terms of the adult tomographic models from the GSF series, only Donna (40-year-old F) closely matches her age-matched US counterpart in terms of average body mass. Regarding standing heights, the better matches to US age-correlated averages belong to Irene (32-year-old F) for the females and Golem (38-year-old M) for the males. Both Helga (27-year-old F) and Donna, however, provide good matches to average US sitting heights for adult females, while Golem and Otoko (male of unknown age) yield sitting heights that are slightly below US adult male averages. Finally, Helga is seen as the only GSF tomographic female model that yields a body mass index in line with her average US female counterpart at age 26. In terms of dose reconstruction activities, however, all current tomographic voxel models are valuable assets in attempting to cover the broad distribution of individual anthropometric parameters representative of the current US population. It is highly recommended that similar attempts to create a broad library of tomographic models be initiated in the United States and elsewhere to complement and extend the limited number of tomographic models presently available for these efforts.

  7. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase runoff temperatures in summer months (by about 5.4 °C on average in this study), which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural network (ANN) modeling techniques to advance our knowledge of the key factors governing the thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013), which predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data from three summers (2009 to 2011) in four stormwater management ponds located in the cities of Guelph and Kitchener, Ontario, Canada to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value, respectively. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.

  8. Nonequilibrium critical dynamics of the two-dimensional Ashkin-Teller model at the Baxter line

    NASA Astrophysics Data System (ADS)

    Fernandes, H. A.; da Silva, R.; Caparica, A. A.; de Felício, J. R. Drugowich

    2017-04-01

    We investigate the short-time universal behavior of the two-dimensional Ashkin-Teller model at the Baxter line by performing time-dependent Monte Carlo simulations. First, as preparatory results, we obtain the critical parameters by searching for the optimal power-law decay of the magnetization. The dynamic critical exponents θm and θp, related to the magnetic and electric order parameters, as well as the persistence exponent θg, are then estimated using heat-bath Monte Carlo simulations. In addition, we estimate the dynamic exponent z and the static critical exponents β and ν for both order parameters. We propose a refined method to estimate the static exponents that considers two different averages: one that combines an internal average using several seeds with another taken over temporal variations in the power laws. Moreover, we also performed bootstrapping for a complementary analysis. Our results show that the ratio β/ν exhibits universal behavior along the critical line, corroborating the conjecture for both magnetization and polarization.

  9. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    USGS Publications Warehouse

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise in these simulations due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as a rate of change less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models; otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather to improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
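
    A minimal sketch of such a zero-dimensional model, assuming Partheniades erosion and Krone deposition laws over a well-mixed control volume; all parameter values below are generic illustrations, not those of the chapter.

        import numpy as np

        rho, Cd = 1000.0, 0.0025            # water density (kg/m3), drag coefficient
        M, tau_ce = 2.0e-5, 0.10            # erosion rate (kg/m2/s), critical stress (Pa)
        ws, tau_cd = 5.0e-4, 0.08           # settling velocity (m/s), deposition stress (Pa)
        h, U_max = 5.0, 0.6                 # depth (m), peak tidal current (m/s)
        omega = 2.0 * np.pi / (12.42 * 3600.0)   # M2 tidal frequency (rad/s)

        dt = 30.0                           # time step (s)
        n = int(30 * 24 * 3600 / dt)        # simulate 30 days
        C, bed = 0.05, 0.0                  # SSC (kg/m3), cumulative bed change (kg/m2)
        for i in range(n):
            U = U_max * np.sin(omega * i * dt)
            tau = rho * Cd * U * U
            E = M * max(tau / tau_ce - 1.0, 0.0)        # erosion flux (kg/m2/s)
            D = ws * C * max(1.0 - tau / tau_cd, 0.0)   # deposition flux (kg/m2/s)
            C += (E - D) / h * dt
            bed += (D - E) * dt

        print(f"net bed change after 30 days: {bed:+.2f} kg/m2")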

  10. Pseudo-conformer models for linear molecules: Joint treatment of spectroscopic, electron diffraction and ab initio data for the C3O2 molecule

    NASA Astrophysics Data System (ADS)

    Tarasov, Yury I.; Kochikov, Igor V.

    2018-06-01

    Dynamic analysis of molecules with large-amplitude motions (LAM) based on the pseudo-conformer approach has been successfully applied to various molecules. Floppy linear molecules present a special class of molecular structures that possess a pair of conjugate LAM coordinates but allow one-dimensional treatment. In this paper, the treatment previously developed for semirigid molecules is applied to the carbon suboxide molecule. This molecule, characterized by extremely large-amplitude CCC bending, has been thoroughly investigated by spectroscopic and ab initio methods. However, the earlier electron diffraction investigations were performed within a static approach, obtaining thermally averaged parameters. In this paper we apply a procedure aimed at obtaining a short list of self-consistent reference geometry parameters of the molecule, while all thermally averaged parameters are calculated from the reference geometry, relaxation dependencies, and quadratic and cubic force constants. We show that such a model satisfactorily describes the available electron diffraction evidence with various quantum-chemical CCC bending potential energy functions when the r.m.s. CCC angle is in the interval 151 ± 2°. This leads to a self-consistent molecular model satisfying both spectroscopic and gas electron diffraction (GED) data. The parameters of the linear reference geometry have been determined as re(CO) = 1.161(2) Å and re(CC) = 1.273(2) Å.

  11. Fine bakery wares with label claims in Europe and their categorisation by nutrient profiling models.

    PubMed

    Trichterborn, J; Harzer, G; Kunz, C

    2011-03-01

    This study assesses a range of commercially available fine bakery wares with nutrition- or health-related on-pack communication against the criteria of selected nutrient profiling models. Different purposes of the application of nutrient profiles were considered, including front-of-pack signposting and the regulation of claims or advertising. More than 200 commercially available fine bakery wares carrying claims were identified in Germany, France, Spain, Sweden and the United Kingdom and evaluated against five nutrient profiling models. All models were assessed regarding their underlying principles, generated results and inter-model agreement levels. Total energy, saturated fatty acids, sugars, sodium and fibre were critical parameters for the categorisation of products. The Choices Programme was the most restrictive model in this category, while the Food and Drug Administration model allowed the highest number of products to qualify. According to all models, more savoury than sweet products met the criteria. On average, qualifying products contained less than half the amount of nutrients to limit and more than double the amount of fibre compared with all the products in the study. None of the models had a significant impact on the average energy contents. Nutrient profiles can be applied to identify fine bakery wares with a significantly better nutritional composition than the average range of products positioned as healthier. Important parameters to take into account include energy, saturated fatty acids, sugars, sodium and fibre. Different criteria sets for subcategories of fine bakery wares do not seem necessary.

  12. Relating multifrequency radar backscattering to forest biomass: Modeling and AIRSAR measurement

    NASA Technical Reports Server (NTRS)

    Sun, Guo-Qing; Ranson, K. Jon

    1992-01-01

    During the last several years, significant efforts in microwave remote sensing were devoted to relating forest parameters to radar backscattering coefficients. These and other studies showed that, in most cases, longer-wavelength (i.e., P-band) and cross-polarization (HV) backscattering had higher sensitivity and better correlation to forest biomass. This research examines this relationship in a northern forest area through both backscatter modeling and synthetic aperture radar (SAR) data analysis. The field measurements were used to estimate stand biomass from forest weight tables. The backscatter model described by Sun et al. was modified to simulate the backscattering coefficients with respect to stand biomass. The average number of trees per square meter or per radar resolution cell and the average tree height or diameter at breast height (dbh) in the forest stand are the driving parameters of the model. The remaining inputs (soil surface properties and the orientation and size distributions of leaves and branches) were held unchanged in the simulations.

  13. Statistical Inference of a RANS closure for a Jet-in-Crossflow simulation

    NASA Astrophysics Data System (ADS)

    Heyse, Jan; Edeling, Wouter; Iaccarino, Gianluca

    2016-11-01

    The jet-in-crossflow is found in several engineering applications, such as discrete film cooling for turbine blades, where a coolant injected through holes in the blade's surface protects the component from the hot gases leaving the combustion chamber. Experimental measurements using MRI techniques have been completed for a single-hole injection into a turbulent crossflow, providing the full 3D averaged velocity field. For such flows of engineering interest, Reynolds-averaged Navier-Stokes (RANS) turbulence closure models are often the only viable computational option. However, RANS models are known to provide poor predictions in the region close to the injection point. Since these models are calibrated on simple canonical flow problems, the obtained closure coefficient estimates are unlikely to extrapolate well to more complex flows. We will therefore calibrate the parameters of a RANS model using statistical inference techniques informed by the experimental jet-in-crossflow data. The obtained probabilistic parameter estimates can in turn be used to compute flow fields with quantified uncertainty. This work was supported by a Stanford Graduate Fellowship in Science and Engineering.

  14. Co-pyrolysis kinetics of sewage sludge and bagasse using multiple normal distributed activation energy model (M-DAEM).

    PubMed

    Lin, Yan; Chen, Zhihao; Dai, Minquan; Fang, Shiwen; Liao, Yanfen; Yu, Zhaosheng; Ma, Xiaoqian

    2018-07-01

    In this study, kinetic models for bagasse, sewage sludge and their mixture were established using the multiple normal distributed activation energy model. With sewage sludge blended in, the initial decomposition temperature declined from 437 K to 418 K. The pyrolytic species could be divided into five categories: analogous hemicelluloses I, hemicelluloses II, cellulose, lignin and bio-char. For these species, the average activation energies and the standard deviations lay in reasonable ranges of 166.4673-323.7261 kJ/mol and 0.1063-35.2973 kJ/mol, respectively, consistent with literature values. The kinetic models matched the experimental data well, with R2 values greater than 99.999%. In the local sensitivity analysis, the distributed average activation energy had a stronger effect on the robustness than the other kinetic parameters, and the content of each pyrolytic species determined which series of kinetic parameters was more important. Copyright © 2018 Elsevier Ltd. All rights reserved.
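
    For concreteness, a sketch of one Gaussian-distributed-activation-energy pseudo-component under a constant heating rate; a full M-DAEM fit would sum five such components with weights and parameters estimated from the thermogravimetric data. All values below are illustrative, not the fitted ones.

        import numpy as np

        R = 8.314                            # gas constant, J/(mol K)

        def daem_residual(T_end, A=1.0e13, E0=200.0e3, sigma=20.0e3, beta=10.0 / 60.0):
            """Remaining mass fraction of one Gaussian-DAEM pseudo-component at
            temperature T_end; A (1/s), E (J/mol), beta (K/s) are illustrative."""
            E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 201)
            f_E = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            T = np.linspace(300.0, T_end, 2000)
            # inner temperature integral of exp(-E/RT'), one value per E
            inner = np.trapz(np.exp(-E[:, None] / (R * T[None, :])), T, axis=1)
            return np.trapz(f_E * np.exp(-(A / beta) * inner), E)

        for T_end in (500.0, 600.0, 700.0, 800.0):
            print(f"T = {T_end:.0f} K, converted fraction = {1.0 - daem_residual(T_end):.3f}")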

  15. Turbulent Flow and Sand Dune Dynamics: Identifying Controls on Aeolian Sediment Transport

    NASA Astrophysics Data System (ADS)

    Weaver, C. M.; Wiggs, G.

    2007-12-01

    Sediment transport models are founded on cubic power relationships between the transport rate and time-averaged flow parameters. These models have achieved limited success, and recent aeolian and fluvial research has focused on the modelling and measurement of sediment transport under temporally varying flow conditions. Studies have recognised turbulence as a driving force in sediment transport and have highlighted the importance of coherent flow structures in sediment transport systems, but the exact mechanisms are still unclear. Furthermore, research in the fluvial environment has identified the significance of turbulent structures for bedform morphology and spacing, while equivalent research in the aeolian domain is absent. This paper reports the findings of research carried out to characterise the importance of turbulent flow parameters in aeolian sediment transport and to determine how turbulent energy and turbulent structures change in response to dune morphology. The relative importance of mean and turbulent wind parameters for aeolian sediment flux was examined in the Skeleton Coast, Namibia. Measurements of wind velocity (using sonic anemometers) and sand transport (using grain impact sensors) at a sampling frequency of 10 Hz were made across a flat surface and along transects on a 9 m high barchan dune. Mean wind parameters and mass sand flux were measured using cup anemometers and wedge-shaped sand traps, respectively. Vertical profile data from the sonic anemometers were used to compute turbulence and turbulent stress (Reynolds stress; instantaneous horizontal and vertical fluctuations; coherent flow structures) and their relationships to sand transport and evolving dune morphology. On the flat surface, time-averaged parameters generally fail to characterise sand transport dynamics, particularly as the averaging interval is reduced; horizontal wind speed, however, correlates well with sand transport even at short averaging times. Quadrant analysis revealed that turbulent events with a positive horizontal component, such as sweeps and outward interactions, were responsible for the majority of sand transport. On the dune surface, results demonstrate the development and modification of turbulence and sediment flux in key regions: toe, crest and brink. Analysis suggests that these modifications are directly controlled by streamline curvature and flow acceleration. Conflicting models of dune development, morphology and stability arise when based upon either the dynamics of measured turbulent flow or mean flow.
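
    A minimal sketch of the quadrant analysis mentioned above, assuming fluctuations are taken about the record mean; random series stand in for the 10 Hz sonic-anemometer data.

        import numpy as np

        # Classify each (u', w') sample into a quadrant and tally its share
        # of the u'w' Reynolds stress. Placeholders replace measured records.
        rng = np.random.default_rng(1)
        u = 6.0 + 0.5 * rng.standard_normal(36000)    # streamwise velocity (m/s)
        w = 0.2 * rng.standard_normal(36000)          # vertical velocity (m/s)

        up, wp = u - u.mean(), w - w.mean()           # fluctuations about the mean
        quadrants = {
            "Q1 outward interaction": (up > 0) & (wp > 0),
            "Q2 ejection":            (up < 0) & (wp > 0),
            "Q3 inward interaction":  (up < 0) & (wp < 0),
            "Q4 sweep":               (up > 0) & (wp < 0),
        }
        total_stress = (up * wp).sum()
        for name, mask in quadrants.items():
            share = (up[mask] * wp[mask]).sum() / total_stress
            print(f"{name}: {mask.mean():6.2%} of samples, {share:6.2%} of u'w'")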

  16. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed using a class of discrete linear time-domain processes commonly referred to as ARMA (autoregressive/moving-average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as input for nonlinear discrete-time models of structural motions. 60 references, 19 figures, 9 tables.

  17. Deformation of a plate with periodically changing parameters

    NASA Astrophysics Data System (ADS)

    Naumova, Natalia V.; Ivanov, Denis; Voloshinova, Tatiana

    2018-05-01

    Deformation of a reinforced square plate under external pressure is considered. The averaged fourth-order partial differential equation for the plate deflection w is obtained, and a new mathematical model of the plate is proposed. Asymptotic averaging and the finite element method (ANSYS) are used to obtain the normal deflections of the plate surface. A comparison of the numerical and asymptotic results is performed.

  18. [Individual growth modeling of the penshell Atrina maura (Bivalvia: Pinnidae) using a multi model inference approach].

    PubMed

    Aragón-Noriega, Eugenio Alberto

    2013-09-01

    Growth models of marine animals, for fisheries and/or aquaculture purposes, are based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed other fisheries models, such as yield per recruit; nevertheless, there are alternatives (such as the Gompertz, logistic and Schnute models), not yet widely used by fishery scientists, that may prove useful depending on the studied species. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth has not been modeled before. The aim of this study was to model the absolute growth of the penshell A. maura using length-age data. For this, five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, logistic, Schnute case 1, and Schnute-Richards. The criteria used to select the best model were the Akaike information criterion, the residual sum of squares, and the adjusted R2. To obtain the average asymptotic length, a multi-model inference approach was used. According to the Akaike information criterion, the Gompertz model best described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I conclude that the multi-model approach with the Akaike information criterion is the most robust method for growth parameter estimation in A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model for absolute growth in bivalve mollusks such as the species studied here.
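
    A minimal sketch of the multi-model inference workflow, with synthetic length-at-age data standing in for the A. maura records: fit three of the candidate models, convert AIC differences to Akaike weights, and average the asymptotic length across models.

        import numpy as np
        from scipy.optimize import curve_fit

        # Candidate length-at-age models; L_inf is the asymptotic length.
        def vb(t, L_inf, k, t0):                    # von Bertalanffy
            return L_inf * (1.0 - np.exp(-k * (t - t0)))

        def gompertz(t, L_inf, k, t0):
            return L_inf * np.exp(-np.exp(-k * (t - t0)))

        def logistic(t, L_inf, k, t0):
            return L_inf / (1.0 + np.exp(-k * (t - t0)))

        # Hypothetical length-at-age pairs (years, mm).
        rng = np.random.default_rng(2)
        t = np.arange(0.5, 5.1, 0.25)
        L = gompertz(t, 219.0, 1.1, 1.0) + 5.0 * rng.standard_normal(t.size)

        fits = []
        for model in (vb, gompertz, logistic):
            p, _ = curve_fit(model, t, L, p0=(220.0, 1.0, 0.5), maxfev=20000)
            rss = np.sum((L - model(t, *p)) ** 2)
            aic = t.size * np.log(rss / t.size) + 2 * 4   # 3 parameters + variance
            fits.append((aic, model.__name__, p[0]))

        aics = np.array([f[0] for f in fits])
        w = np.exp(-0.5 * (aics - aics.min()))
        w /= w.sum()                                      # Akaike weights
        L_inf_avg = sum(wi * f[2] for wi, f in zip(w, fits))
        print(f"model-averaged asymptotic length: {L_inf_avg:.1f} mm")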

  19. Radar altimeter waveform modeled parameter recovery [SEASAT-1 data]

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for SEASAT-1 altimeter waveform data analysis, the approach can easily be generalized and extended to other radar altimeter systems.

  20. Precipitation data considerations for evaluating subdaily changes in rainless periods due to climate change

    USDA-ARS?s Scientific Manuscript database

    Quantifying magnitudes and frequencies of rainless times between storms (TBS), or storm occurrence, is required for generating continuous sequences of precipitation for modeling inputs to small watershed models for conservation studies. Two parameters characterize TBS, minimum TBS (MTBS) and averag...

  1. Spatial averaging of a dissipative particle dynamics model for active suspensions

    NASA Astrophysics Data System (ADS)

    Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot

    2018-03-01

    Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.

  2. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via Swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster-expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component; more realistic modelling of the effect will need this construction to be generalised.

  3. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analyses show that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

  4. Upscaled soil-water retention using van Genuchten's function

    USGS Publications Warehouse

    Green, T.R.; Constantz, J.E.; Freyberg, D.L.

    1996-01-01

    Soils are often layered at scales smaller than the block size used in numerical and conceptual models of variably saturated flow. Consequently, the small-scale variability in water content within each block must be homogenized (upscaled). Laboratory results have shown that a linear volume average (LVA) of water content at a uniform suction is a good approximation to measured water contents in heterogeneous cores. Here, we upscale water contents using van Genuchten's function for both the local and upscaled soil-water-retention characteristics. The van Genuchten (vG) function compares favorably with LVA results, laboratory experiments under hydrostatic conditions in 3-cm cores, and numerical simulations of large-scale gravity drainage. Our method yields upscaled vG parameter values by fitting the vG curve to the LVA of water contents at various suction values. In practice, it is more efficient to compute direct averages of the local vG parameter values. Nonlinear power averages quantify a feasible range of values for each upscaled vG shape parameter; upscaled values of N are consistently less than the harmonic means, reflecting broad pore-size distributions of the upscaled soils. The vG function is useful for modeling soil-water retention at large scales, and these results provide guidance for its application.
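
    A minimal sketch of the upscaling procedure, with invented two-layer local parameters: compute the linear volume average of water contents at each suction, then fit upscaled vG parameters to the averaged retention curve.

        import numpy as np
        from scipy.optimize import curve_fit

        def vg_theta(h, theta_r, theta_s, alpha, n):
            """van Genuchten retention theta(h), with m = 1 - 1/n and suction h > 0."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

        # Two equally thick layers with illustrative local vG parameters:
        # (theta_r, theta_s, alpha [1/cm], n, volume fraction).
        layers = [(0.05, 0.40, 0.020, 1.6, 0.5),
                  (0.08, 0.45, 0.005, 2.2, 0.5)]

        # Linear volume average (LVA) of local water contents at each suction.
        h = np.logspace(-1, 4, 60)          # suction, cm of water
        theta_lva = sum(f * vg_theta(h, tr, ts, a, n) for tr, ts, a, n, f in layers)

        # Fit upscaled vG parameters to the LVA retention curve.
        p_up, _ = curve_fit(vg_theta, h, theta_lva,
                            p0=(0.06, 0.42, 0.01, 1.8), maxfev=20000)
        print("upscaled (theta_r, theta_s, alpha, n):", np.round(p_up, 4))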

  5. The Slip Behavior and Source Parameters for Spontaneous Slip Events on Rough Faults Subjected to Slow Tectonic Loading

    NASA Astrophysics Data System (ADS)

    Tal, Yuval; Hager, Bradford H.

    2018-02-01

    We study the response to slow tectonic loading of rough faults governed by velocity-weakening rate-and-state friction, using a 2-D plane strain model. Our numerical approach accounts for all stages in the seismic cycle, and in each simulation we model a sequence of two or more earthquakes. We focus on the global behavior of the faults and find that as the roughness amplitude, br, increases and the minimum wavelength of roughness decreases, there is a transition from seismic slip to aseismic slip, in which the load on the fault is released by more slip events but with lower slip rate, lower seismic moment per unit length, M0,1d, and lower average static stress drop on the fault, Δτt. Even larger decreases with roughness are observed when these source parameters are estimated only for the dynamic stage of the rupture. For br ≤ 0.002, the source parameters M0,1d and Δτt decrease together, and the relationship between Δτt and the average fault strain is similar to that of a smooth fault. For faults with larger values of br that are completely ruptured during the slip events, the average fault strain generally decreases more rapidly with roughness than Δτt.

  6. Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.

    PubMed

    Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H

    2010-02-01

    Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group and number of daily milkings, with age of cow at calving as a covariate (linear and quadratic effects). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, the weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
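
    For readers unfamiliar with random regression test-day models, the sketch below builds the Legendre polynomial covariates for a given week of lactation; the polynomial order is what the twelve candidate models vary. A 44-week lactation and unnormalized polynomials are assumptions for illustration (implementations often normalize the polynomials).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(week, order, w_min=1, w_max=44):
    """Covariates P_0(x)..P_order(x) for a random regression test-day model,
    with the week of lactation standardized to x in [-1, 1]."""
    x = 2.0 * (week - w_min) / (w_max - w_min) - 1.0
    # legval with a unit coefficient vector evaluates a single P_k(x)
    return np.array([legendre.legval(x, np.eye(order + 1)[k])
                     for k in range(order + 1)])

print(legendre_covariates(week=10, order=4))  # row of a 4th-order design matrix
```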

  7. Aerobic composting of waste activated sludge: Kinetic analysis for microbiological reaction and oxygen consumption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Y.; Kawase, Y.

    2006-07-01

    In order to examine the optimal design and operating parameters, the kinetics of microbiological reaction and oxygen consumption in composting of waste activated sludge were quantitatively examined. A series of experiments was conducted to determine the optimal operating parameters for aerobic composting of waste activated sludge obtained from Kawagoe City Wastewater Treatment Plant (Saitama, Japan) using 4 and 20 L laboratory-scale bioreactors. Aeration rate, composition of the compost mixture and height of the compost pile were investigated as the main design and operating parameters. The optimal aerobic composting of waste activated sludge was found at an aeration rate of 2.0 L/min/kg (initial composting mixture dry weight). A compost pile of up to 0.5 m could be operated effectively. A simple model for composting of waste activated sludge in a composting reactor was developed by assuming that the solid phase of the compost mixture is well mixed and that the kinetics of the microbiological reaction are represented by a Monod-type equation. The model predictions could fit the experimental data for decomposition of waste activated sludge with an average deviation of 2.14%. Oxygen consumption during composting was also examined using a simplified model in which the oxygen consumption was represented by a Monod-type equation and the axial distribution of oxygen concentration in the composting pile was described by a plug-flow model. The predictions could satisfactorily simulate the experimental results for the average maximum oxygen consumption rate during aerobic composting with an average deviation of 7.4%.
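
    The sketch below integrates a Monod-type decomposition rate of the kind the model assumes for the well-mixed solid phase; the rate constants are illustrative placeholders, not the fitted values behind the quoted 2.14% deviation.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s = 0.08, 5.0  # hypothetical Monod constants [1/h] and [g/kg]

def dSdt(t, S):
    """Monod-type decomposition of the degradable solids S."""
    return -mu_max * S / (K_s + S)

sol = solve_ivp(dSdt, t_span=(0.0, 200.0), y0=[20.0], dense_output=True)
t = np.linspace(0.0, 200.0, 5)
print(np.round(sol.sol(t)[0], 2))  # remaining degradable solids over time [g/kg]
```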

  8. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
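
    A minimal sketch of the simpler of the two statistical methods, averaging weighted by the aggregate score; the exponential mapping from misfit to weight is an assumed, common choice, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 625
params = rng.uniform(0.0, 1.0, size=(n_runs, 4))  # full-factorial samples (stand-in)
misfit = rng.uniform(0.5, 5.0, size=n_runs)       # aggregate model-data misfit per run

w = np.exp(-misfit)            # assumed score-to-weight mapping: good runs dominate
w /= w.sum()
best_fit = w @ params                              # weighted parameter means
spread = np.sqrt(w @ (params - best_fit) ** 2)     # weighted parametric uncertainty
print(np.round(best_fit, 3), np.round(spread, 3))
```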

  9. Seizure prediction in hippocampal and neocortical epilepsy using a model-based approach

    PubMed Central

    Aarabi, Ardalan; He, Bin

    2014-01-01

    Objectives The aim of this study is to develop a model-based seizure prediction method. Methods A neural mass model was used to simulate the macro-scale dynamics of intracranial EEG data. The model was composed of pyramidal cells, excitatory and inhibitory interneurons described through state equations. Twelve model parameters were estimated by fitting the model to the power spectral density of intracranial EEG signals and then integrated based on information obtained by investigating changes in the parameters prior to seizures. Twenty-one patients with medically intractable hippocampal and neocortical focal epilepsy were studied. Results Tuned to obtain maximum sensitivity, an average sensitivity of 87.07% and 92.6% with an average false prediction rate of 0.2 and 0.15/h were achieved using maximum seizure occurrence periods of 30 and 50 min and a minimum seizure prediction horizon of 10 s, respectively. Under maximum specificity conditions, the system sensitivity decreased to 82.9% and 90.05% and the false prediction rates were reduced to 0.16 and 0.12/h using maximum seizure occurrence periods of 30 and 50 min, respectively. Conclusions The spatio-temporal changes in the parameters demonstrated patient-specific preictal signatures that could be used for seizure prediction. Significance The present findings suggest that the model-based approach may aid prediction of seizures. PMID:24374087

  10. The efficacy of calibrating hydrologic model using remotely sensed evapotranspiration and soil moisture for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.

    2016-04-01

    Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET resulted in the best streamflow predictions, and its efficacy was superior for catchments with medium to high average runoff. The synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
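
    The sketch below shows the general shape of such a calibration objective, an RMSD between simulated and RS-observed fluxes, with a toy stand-in for the hydrologic model (neither AWRA-L nor the SCE-UA optimizer is reproduced here).

```python
import numpy as np

def rmsd(sim, obs):
    """Root mean square deviation between simulated and observed series."""
    return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

# Toy stand-in for the model: two parameters scale ET and SM from a forcing series
def toy_model(params, forcing):
    return params[0] * forcing, params[1] * forcing

rng = np.random.default_rng(1)
forcing = rng.uniform(1.0, 5.0, size=365)
et_obs = 0.6 * forcing + rng.normal(0.0, 0.1, size=365)  # "RS-observed" ET
sm_obs = 0.3 * forcing                                   # "RS-observed" SM

def objective(params):
    et_sim, sm_sim = toy_model(params, forcing)
    return rmsd(et_sim, et_obs) + rmsd(sm_sim, sm_obs)   # lower is better

print(objective([0.6, 0.3]), objective([0.9, 0.1]))
```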

  11. Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction

    NASA Astrophysics Data System (ADS)

    Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat

    2018-05-01

    Ultra-short laser radiation is used in manifold industrial applications today. Although state-of-the-art laser sources provide an average power of 10-100 W with repetition rates of up to several megahertz, most applications do not benefit from it. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects such as heat accumulation and shielding that might reduce the process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate: the ablated volume per unit time and average power. The analysis of the specific removal rate for different laser parameters, like average power, repetition rate or pulse duration, and process parameters, like scanning speed or material, can be used to find the best operation point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes of the limiting physical effects. The findings of models and simulations can be used to exploit these effects and optimize processing strategies.
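
    The specific removal rate quoted above reduces to a one-line computation; the numbers below are purely illustrative.

```python
# Specific removal rate: ablated volume per unit time and average power
ablated_volume_mm3 = 0.042  # measured ablated volume [mm^3] (illustrative)
process_time_s = 10.0       # machining time [s]
avg_power_W = 20.0          # average laser power [W]

removal_rate = ablated_volume_mm3 / (process_time_s * avg_power_W)
print(f"specific removal rate: {removal_rate * 60.0:.4f} mm^3/(W*min)")
```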

  12. The predicted influence of climate change on lesser prairie-chicken reproductive parameters

    USGS Publications Warehouse

    Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.

  13. The predicted influence of climate change on lesser prairie-chicken reproductive parameters.

    PubMed

    Grisham, Blake A; Boal, Clint W; Haukos, David A; Davis, Dawn M; Boydston, Kathy K; Dixon, Charles; Heck, Willard R

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.

  14. Rethinking CMB foregrounds: systematic extension of foreground parametrizations

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Hill, James Colin; Abitbol, Maximilian H.

    2017-11-01

    Future high-sensitivity measurements of the cosmic microwave background (CMB) anisotropies and energy spectrum will be limited by our understanding and modelling of foregrounds. Not only does more information need to be gathered and combined, but also novel approaches for the modelling of foregrounds, commensurate with the vast improvements in sensitivity, have to be explored. Here, we study the inevitable effects of spatial averaging on the spectral shapes of typical foreground components, introducing a moment approach, which naturally extends the list of foreground parameters that have to be determined through measurements or constrained by theoretical models. Foregrounds are thought of as a superposition of individual emitting volume elements along the line of sight and across the sky, which then are observed through an instrumental beam. The beam and line-of-sight averages are inevitable. Instead of assuming a specific model for the distributions of physical parameters, our method identifies natural new spectral shapes for each foreground component that can be used to extract parameter moments (e.g. mean, dispersion, cross terms, etc.). The method is illustrated for the superposition of power laws, free-free spectra, grey-body and modified blackbody spectra, but can be applied to more complicated fundamental spectral energy distributions. Here, we focus on intensity signals but the method can be extended to the case of polarized emission. The averaging process automatically produces scale-dependent spectral shapes and the moment method can be used to propagate the required information across scales in power spectrum estimates. The approach is not limited to applications to CMB foregrounds, but could also be useful for the modelling of X-ray emission in clusters of galaxies.
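
    A minimal numerical check of the idea for the simplest case mentioned, a superposition of power laws: the line-of-sight average of x**beta over a distribution of spectral indices is well reproduced by the mean index plus a second-moment (variance) correction, which is the leading new "moment" parameter.

```python
import numpy as np

rng = np.random.default_rng(2)
betas = rng.normal(-3.0, 0.2, size=10_000)  # spectral indices of the emitting elements
x = np.geomspace(0.1, 10.0, 7)              # frequency relative to the pivot

exact = np.mean(x[None, :] ** betas[:, None], axis=0)   # brute-force superposition
mean_b, var_b = betas.mean(), betas.var()
moment = x ** mean_b * (1.0 + 0.5 * var_b * np.log(x) ** 2)  # 2nd-moment expansion

print("max relative residual:", np.max(np.abs(moment / exact - 1.0)))
```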

  15. Modelling of the combustion velocity in UIT-85 on sustainable alternative gas fuel

    NASA Astrophysics Data System (ADS)

    Smolenskaya, N. M.; Korneev, N. V.

    2017-05-01

    The flame propagation velocity is one of the determining parameters characterizing the intensity of the combustion process in the cylinder of a spark-ignition engine. Tightening requirements on ICE toxicity and efficiency are driving a gradual transition to sustainable alternative fuels, including mixtures of natural gas with hydrogen. Currently, studies of the conditions and regularities of combustion of this fuel are carried out in many countries to improve the efficiency of its application. This work is therefore devoted to modeling the average propagation velocity of the flame front in natural gas blended with up to 15% hydrogen by mass of the fuel, and to determining whether the heat release characteristics can be assessed from the average flame front velocities in the primary and main phases of combustion. Experimental studies conducted on the single-cylinder universal installation UIT-85 showed a relationship between the heat release characteristics and the parameters of flame front propagation. Based on the analysis of the experimental data, empirical dependences were obtained for determining the average flame front velocities in the first and main phases of combustion, taking into account changes in various operating parameters of a spark-ignition engine. The results obtained allow the heat release characteristics to be determined and the effect of hydrogen addition on the natural gas combustion process to be assessed, which is needed to identify ways of improving combustion efficiency, including under varying throttling conditions.

  16. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation between simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. The linear analysis shows that the minimum sampling frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.

  17. The Rangeland Hydrology and Erosion Model: A Dynamic Approach for Predicting Soil Loss on Rangelands

    NASA Astrophysics Data System (ADS)

    Hernandez, Mariano; Nearing, Mark A.; Al-Hamdan, Osama Z.; Pierson, Frederick B.; Armendariz, Gerardo; Weltz, Mark A.; Spaeth, Kenneth E.; Williams, C. Jason; Nouwakpo, Sayjro K.; Goodrich, David C.; Unkrich, Carl L.; Nichols, Mary H.; Holifield Collins, Chandra D.

    2017-11-01

    In this study, we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed against data collected from 23 runoff and sediment events in a shrub-dominated semiarid watershed in Arizona, USA. To evaluate the model, two sets of primary model parameters were determined using the RHEM V2.3 and RHEM V1.0 parameter estimation equations. Testing of the parameters indicated that RHEM V2.3 parameter estimation equations provided a 76% improvement over RHEM V1.0 parameter estimation equations. Second, the RHEM V2.3 model was calibrated to measurements from the watershed. The parameters estimated by the new equations were within the lowest and highest values of the calibrated parameter set. These results suggest that the new parameter estimation equations can be applied for this environment to predict sediment yield at the hillslope scale. Furthermore, we also applied the RHEM V2.3 to demonstrate the response of the model as a function of foliar cover and ground cover for 124 data points across Arizona and New Mexico. The dependence of average sediment yield on surface ground cover was moderately stronger than that on foliar cover. These results demonstrate that RHEM V2.3 predicts runoff volume, peak runoff, and sediment yield with sufficient accuracy for broad application to assess and manage rangeland systems.

  18. A Smoluchowski model of crystallization dynamics of small colloidal clusters

    NASA Astrophysics Data System (ADS)

    Beltran-Villegas, Daniel J.; Sehgal, Ray M.; Maroudas, Dimitrios; Ford, David M.; Bevan, Michael A.

    2011-10-01

    We investigate the dynamics of colloidal crystallization in a 32-particle system at a fixed value of interparticle depletion attraction that produces coexisting fluid and solid phases. Free energy landscapes (FELs) and diffusivity landscapes (DLs) are obtained as coefficients of 1D Smoluchowski equations using as order parameters either the radius of gyration or the average crystallinity. FELs and DLs are estimated by fitting the Smoluchowski equations to Brownian dynamics (BD) simulations using either linear fits to locally initiated trajectories or global fits to unbiased trajectories using Bayesian inference. The resulting FELs are compared to Monte Carlo Umbrella Sampling results. The accuracy of the FELs and DLs for modeling colloidal crystallization dynamics is evaluated by comparing mean first-passage times from BD simulations with analytical predictions using the FEL and DL models. While the 1D models accurately capture dynamics near the free energy minimum fluid and crystal configurations, predictions near the transition region are not quantitatively accurate. A preliminary investigation of ensemble averaged 2D order parameter trajectories suggests that 2D models are required to capture crystallization dynamics in the transition region.
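
    The mean first-passage-time comparison rests on the standard 1-D Smoluchowski MFPT double integral, T(x0 -> b) = int_x0^b dy exp(F(y)/kT)/D(y) * int_a^y dz exp(-F(z)/kT) with a reflecting boundary at a; the sketch below evaluates it for an illustrative double-well FEL and an order-parameter-dependent DL, stand-ins for the fitted landscapes.

```python
import numpy as np

kT = 1.0
x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]
F = (x**2 - 1.0) ** 2              # illustrative double-well free energy landscape
D = 0.8 + 0.2 * np.tanh(x)         # illustrative diffusivity landscape

# inner[k] ~ integral of exp(-F/kT) from the reflecting boundary up to x[k]
inner = np.concatenate(([0.0], np.cumsum(np.exp(-F / kT))[:-1])) * dx
integrand = np.exp(F / kT) / D * inner

i0, ib = np.searchsorted(x, -1.0), np.searchsorted(x, 0.0)  # left well -> barrier top
T = integrand[i0:ib].sum() * dx
print(f"mean first-passage time: {T:.2f} (model units)")
```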

  19. The fatigue life study of polyphenylene sulfide composites filled with continuous glass fibers

    NASA Astrophysics Data System (ADS)

    Ye, Junjie; Hong, Yun; Wang, Yongkun; Zhai, Zhi; Shi, Baoquan; Chen, Xuefeng

    2018-04-01

    In this study, an effective microscopic model is proposed to investigate the fatigue life of composites containing continuous glass fibers surrounded by a polyphenylene sulfide (PPS) matrix. The representative volume element is discretized by parametric elements, and the microscopic model is established by employing the relation between average surface displacements and average surface tractions. Based on the experimental data, the required fatigue failure parameters of the PPS are determined. Two different fiber arrangements are considered for comparison; numerical analyses indicated that the square edge packing provides better accuracy. In addition, the effect of microscopic structural parameters (fiber volume fraction, fiber off-axis angle) on the fatigue life of Glass/PPS composites is further discussed. It is revealed that the effect of fiber strength degradation on the fatigue life of continuous fiber-reinforced composites can be ignored.

  20. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides an estimate of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits, and using the actual arrival times an average absolute error of 8.20 hours was found across the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrival is not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.

  1. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.

  2. A Gaussian wave packet phase-space representation of quantum canonical statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughtrie, David J.; Tew, David P.

    2015-07-28

    We present a mapping of quantum canonical statistical averages onto a phase-space average over thawed Gaussian wave-packet (GWP) parameters, which is exact for harmonic systems at all temperatures. The mapping invokes an effective potential surface, experienced by the wave packets, and a temperature-dependent phase-space integrand, to correctly transition from the GWP average at low temperature to classical statistics at high temperature. Numerical tests on weakly and strongly anharmonic model systems demonstrate that thermal averages of the system energy and geometric properties are accurate to within 1% of the exact quantum values at all temperatures.

  3. K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution

    DOE PAGES

    DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...

    2017-06-09

    The k-ε turbulence model has been described as perhaps "the most widely used complete turbulence model." This family of heuristic Reynolds-Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding the satisfaction of well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near-field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.

  4. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of the non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λ_det(t) + λ_rnd(t), where λ_det(t) is a deterministic function of time and λ_rnd(t) is a random function. The parameters of λ_det(t) and λ_rnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS was carried out and average statistical dependences were obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium as functions of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors admitted by the start of the match; the time required to serve all incoming visitors; the maximum value; and the time at which the studied dependence reaches that maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
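
    A minimal time-stepped simulation in the spirit of the NQS described above: Poisson arrivals with rate λ(t) = λ_det(t) + λ_rnd(t) and capacity-limited service. The bell-shaped deterministic rate mimics a pre-match spectator inflow; all constants are illustrative, not the fitted stadium data.

```python
import numpy as np

rng = np.random.default_rng(3)

T, dt = 7200.0, 1.0                                     # 2 h before the match, 1 s steps
t = np.arange(0.0, T, dt)
lam_det = 4.0 * np.exp(-((t - 5400.0) / 1200.0) ** 2)   # deterministic part [1/s]
lam_rnd = 0.3 * rng.standard_normal(t.size)             # random part
lam = np.clip(lam_det + lam_rnd, 0.0, None)

c, mu = 10, 0.5                      # service channels (turnstiles), service rate [1/s]
queue, served, q_max = 0, 0, 0
for k in range(t.size):
    queue += rng.poisson(lam[k] * dt)           # stochastic arrivals
    s = min(queue, rng.poisson(c * mu * dt))    # capacity-limited departures
    queue -= s
    served += s
    q_max = max(q_max, queue)

print(f"visitors served: {served}, maximum queue length: {q_max}")
```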

  5. Effect of modulated ultrasound parameters on ultrasound-induced thrombolysis.

    PubMed

    Soltani, Azita; Volz, Kim R; Hansmann, Doulas R

    2008-12-07

    The potential of ultrasound to enhance enzyme-mediated thrombolysis by application of constant operating parameters (COP) has been widely demonstrated. In this study, the effect of ultrasound with modulated operating parameters (MOP) on enzyme-mediated thrombolysis was investigated. The MOP protocol was applied to an in vitro model of thrombolysis. The results were compared to a COP with the equivalent soft tissue thermal index (TIS) over the duration of ultrasound exposure of 30 min (p < 0.14). To explore potential differences in the mechanism responsible for ultrasound-induced thrombolysis, a perfusion model was used to measure changes in the average fibrin pore size of clots before, after and during exposure to the MOP and COP protocols, and cavitational activity was monitored in real time for both protocols using a passive cavitation detection system. The relative lysis enhancement by the COP and MOP protocols compared to alteplase alone yielded values of 33.69 +/- 12.09% and 63.89 +/- 15.02% in a thrombolysis model, respectively (p < 0.007). Both COP and MOP protocols caused an equivalent significant increase in average clot pore size of 2.09 × 10^-2 +/- 0.01 μm and 1.99 × 10^-2 +/- 0.004 μm, respectively (p < 0.74). No signatures of inertial or stable cavitation were observed for either acoustic protocol. In conclusion, due to mechanisms other than cavitation, application of ultrasound with modulated operating parameters has the potential to significantly enhance relative lysis compared to application of ultrasound with constant operating parameters.

  6. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model, BIOME-BGC, was run for a sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of dry tropical forest in India. The sensitivity test revealed that forest NPP was highly sensitive to the following ecophysiological parameters: canopy light extinction coefficient (k), canopy average specific leaf area (SLA), new stem C : new leaf C (SC:LC), maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), all-sided to projected leaf area ratio, and canopy water interception coefficient (Wint). These parameters therefore need particular precision and attention during estimation and observation in field studies.

  7. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at single stations provides an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativity of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations, we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated both parameterizations by comparing domain-averaged snow depth derived from nearby flat field measurements with the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
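
    A sketch of the general form of the second parameterization as described above, a power-law elevation trend scaled by the subgrid sky view factor; the exponent, scaling constant and exact functional form are placeholders, since the published coefficients are not reproduced here.

```python
import numpy as np

def domain_avg_snow_depth(hs_flat, z, z_flat, svf, gamma=1.5, c=1.0):
    """Hypothetical form: flat-field snow depth hs_flat at elevation z_flat,
    extrapolated by a power-law elevation trend and scaled by the subgrid
    mean sky view factor svf (gamma and c are illustrative constants)."""
    return hs_flat * (z / z_flat) ** gamma * (c * svf)

z_cells = np.array([1200.0, 1800.0, 2400.0])   # grid-cell mean elevations [m]
svf_cells = np.array([0.95, 0.85, 0.70])       # subgrid mean sky view factors
print(domain_avg_snow_depth(1.2, z_cells, 1500.0, svf_cells))  # [m]
```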

  8. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  9. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    PubMed Central

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several measures, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)_12 model was the most appropriate one, with the residual test showing a white-noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four error measures of the hybrid model were lower than those of the two single models in the validation stage, while those of the GRNN model were the lowest of the three models in the fitting stage. Conclusions The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
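
    A minimal sketch of the hybrid idea on synthetic seasonal data: a GRNN, implemented here as its core, Gaussian-kernel (Nadaraya-Watson) regression, is trained to map ARIMA fitted values onto observations, correcting nonlinear structure the linear model misses. Only the ARIMA order and the 0.07 smoothing factor follow the abstract; the data and the in-sample evaluation are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def grnn_predict(x_train, y_train, x_new, sigma):
    """GRNN prediction: kernel-weighted average of training targets,
    with smoothing factor sigma."""
    w = np.exp(-((x_new[:, None] - x_train[None, :]) ** 2) / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic monthly incidence with seasonality (stand-in for the Heng County data)
rng = np.random.default_rng(4)
t = np.arange(96)
y = 5.0 + 0.01 * t + 1.5 * np.sin(2.0 * np.pi * t / 12.0) + rng.normal(0.0, 0.3, t.size)

fit = ARIMA(y, order=(0, 1, 2), seasonal_order=(1, 1, 1, 12)).fit()
arima_fitted = fit.fittedvalues

# Hybrid step: GRNN corrects the ARIMA output toward the observations
y_hybrid = grnn_predict(arima_fitted, y, arima_fitted, sigma=0.07)
print("ARIMA  MAE:", np.mean(np.abs(arima_fitted - y)).round(4))
print("hybrid MAE:", np.mean(np.abs(y_hybrid - y)).round(4))
```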

  10. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
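
    The "weighted average" statement can be made concrete with the resolution matrix of a regularized inversion: for Tikhonov regularization the estimate equals R @ p_true, and the rows of R are exactly the averaging weights described above. The sketch uses a random stand-in for a model Jacobian.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_par, lam = 12, 30, 0.1         # underdetermined: fewer data than parameters
J = rng.normal(size=(n_obs, n_par))     # stand-in Jacobian (sensitivity matrix)

G = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T)  # regularized inverse
R = G @ J                                                # resolution matrix

# diag(R) = 1 would mean perfect resolution; off-diagonal mass means each
# estimated property blends true values from elsewhere in the domain.
off_diag = np.abs(R).sum(axis=1) - np.abs(np.diag(R))
print("mean diag(R):", R.diagonal().mean().round(3),
      "| mean off-diagonal weight:", off_diag.mean().round(3))
```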

  11. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  12. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.
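
    A much-simplified sketch of averaging correlated measurements in this spirit (a BLUE-type combination, far from the full HFAG machinery): the weights come from the inverse covariance, here with an assumed common correlation between three hypothetical measurements.

```python
import numpy as np

x = np.array([1.02, 0.97, 1.05])    # three measurements of one parameter
sig = np.array([0.04, 0.05, 0.06])  # their uncertainties
rho = 0.3                           # assumed common correlation

C = np.outer(sig, sig) * (rho + (1.0 - rho) * np.eye(3))  # covariance matrix
Cinv1 = np.linalg.solve(C, np.ones(3))
w = Cinv1 / Cinv1.sum()             # minimum-variance weights summing to 1

avg, err = w @ x, 1.0 / np.sqrt(Cinv1.sum())
print(f"average = {avg:.4f} +/- {err:.4f}, weights = {np.round(w, 3)}")
```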

  13. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    PubMed Central

    Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina

    2013-01-01

    Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729

  14. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    PubMed

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  15. The evaluation of the average energy parameters for spectra of quasimonoenergetic neutrons produced in (p,n)-reactions on solid tritium targets

    NASA Astrophysics Data System (ADS)

    Sosnin, A. N.; Shorin, V. S.

    1989-10-01

    Fast neutron cross-section measurements using quasimonoenergetic (p,n) neutron sources require the determination of average neutron spectrum parameters such as the mean energy <E> and the variance D. In this paper a simple model has been considered for determining the <E> and D values. The approach takes into account the actual layout of the solid tritium target and the irradiated sample. It is valid for targets with a thickness of less than 1 mg/cm^2. It has been shown that the first and second moments of the tritium distribution function, <x> and <x^2>, are connected by simple analytical expressions with the average characteristics of the neutron yield measured above the (p,n) reaction threshold energy. Our results are compared with accurate calculations for Sc-T targets.
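
    Computing <E> and D from a tabulated spectrum is a pair of quadratures; the sketch below uses an illustrative quasimonoenergetic peak with a low-energy tail, not a real Sc-T target spectrum (on a uniform grid the normalization cancels, so plain sums suffice).

```python
import numpy as np

def spectrum_moments(E, phi):
    """Mean energy <E> and variance D of a spectrum phi(E) on a uniform grid."""
    mean = np.sum(E * phi) / np.sum(phi)
    var = np.sum((E - mean) ** 2 * phi) / np.sum(phi)
    return mean, var

E = np.linspace(0.1, 2.0, 500)  # MeV
phi = np.exp(-(((E - 1.5) / 0.05) ** 2)) + 0.05 * np.exp(-E)  # peak + tail (illustrative)
mean, D = spectrum_moments(E, phi)
print(f"<E> = {mean:.3f} MeV, D = {D:.4f} MeV^2")
```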

  16. Improved hydrological-model design by integrating nutrient and water flow

    NASA Astrophysics Data System (ADS)

    Arheimer, B.; Lindstrom, G.

    2013-12-01

    The potential of integrating hydrologic and nutrient concentration data to better understand patterns of catchment response and to better design hydrological modeling was explored using a national multi-basin model system for Sweden, called 'S-HYPE'. The model system covers more than 450 000 km2 and produces daily values of nutrient concentration and water discharge in 37 000 catchments from 1961 onwards. It is based on the process-based and semi-distributed HYdrological Predictions for the Environment (HYPE) code. The model is used operationally for assessments of water status and climate change impacts and for forecasts by the national warning service for floods, droughts and fires. The first model was launched in 2008, but S-HYPE is continuously improved and released in new versions every second year. Observations are available in 400 sites for daily water discharge and some 900 sites for monthly grab samples of nutrient concentrations. The latest version (2012) has an average NSE for water discharge of 0.7 and an average relative error of 5%, including both regulated and unregulated rivers with catchments from ten to several thousands of km2 and various land uses. The daily relative errors of nutrient concentrations are on average 20% for total nitrogen and 35% for total phosphorus. This presentation gives practical examples of how the nutrient data have been used to trace errors or inadequate parameter values in the hydrological model. Since 2008, several parts of the model structure have been reconsidered, in the source code, parameter values and input data of catchment characteristics. In this process, water quality has guided much of the overall model design of catchment hydrological functions and routing along the river network. The model structure has thus been developed iteratively by evaluating results and checking time series. Examples of water-quality-driven improvements are given for the estimation of vertical flow paths, such as separation of the hydrograph into surface flow, snowmelt and baseflow, as well as horizontal flow paths in the landscape, such as mixing from various land uses, the impact of lakes and river channel volume. Overall, S-HYPE model performance for water discharge increased from an average NSE of 0.55 to 0.69 across 400 gauges between the 2010 and 2012 versions. Most of this improvement can be attributed to improved regulation routines, rating curves for major lakes and parameters correcting ET and precipitation. Nevertheless, integrated water and nutrient modeling puts constraints on the hydrological parameter values, which reduces equifinality for the hydrological part without reducing model performance. The examples illustrate that the credibility of the hydrological model structure is thereby improved by integrating water and nutrient flow. This leads to improved understanding of flow paths and water-nutrient process interactions in Sweden, which in turn will be very useful in further model analyses of climate change impacts or of measures to reduce nutrient loads from rivers to the Baltic Sea.
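
    For reference, the two performance measures quoted for S-HYPE are computed as below; the discharge series are synthetic stand-ins.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the observed mean."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_error(sim, obs):
    """Relative volume error of simulated versus observed discharge."""
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 5.0, size=365)                # synthetic daily discharge
sim = obs * 1.05 + rng.normal(0.0, 2.0, size=365)  # a slightly biased simulation
print(f"NSE = {nse(sim, obs):.2f}, relative error = {relative_error(sim, obs):+.1%}")
```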

  17. Mathematical models for predicting human mobility in the context of infectious disease spread: introducing the impedance model.

    PubMed

    Sallah, Kankoé; Giorgi, Roch; Bengtsson, Linus; Lu, Xin; Wetter, Erik; Adrien, Paul; Rebaudet, Stanislas; Piarroux, Renaud; Gaudart, Jean

    2017-11-22

    Mathematical models of human mobility have demonstrated a great potential for infectious disease epidemiology in contexts of data scarcity. While the commonly used gravity model involves parameter tuning and is thus difficult to implement without reference data, the more recent radiation model based on population densities is parameter-free, but biased. In this study we introduce the new impedance model, by analogy with electricity. Previous research has compared models on the basis of a few specific available spatial patterns. In this study, we use a systematic simulation-based approach to assess the performances. Five hundred spatial patterns were generated using various area sizes and location coordinates. Model performances were evaluated based on these patterns. For simulated data, comparison measures were average root mean square error (aRMSE) and bias criteria. Modeling of the 2010 Haiti cholera epidemic with a basic susceptible-infected-recovered (SIR) framework allowed an empirical evaluation through assessing the goodness-of-fit of the observed epidemic curve. The new, parameter-free impedance model outperformed previous models on simulated data according to average aRMSE and bias criteria. The impedance model achieved better performances with heterogeneous population densities and small destination populations. As a proof of concept, the basic compartmental SIR framework was used to confirm the results obtained with the impedance model in predicting the spread of cholera in Haiti in 2010. The proposed new impedance model provides accurate estimations of human mobility, especially when the population distribution is highly heterogeneous. This model can therefore help to achieve more accurate predictions of disease spread in the context of an epidemic.

  18. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    PubMed Central

    Tosun, İsmail

    2012-01-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by using clinoptilolite in aqueous solution was investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted to experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. A Double Exponential Model (DEM) showed that the adsorption process develops in two stages as rapid and slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177

  19. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    PubMed

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by using clinoptilolite in aqueous solution was investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. D-R and R-P isotherms were the models that best fitted to experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R(2)) of above 0.989 with an average relative error lower than 5%. A Double Exponential Model (DEM) showed that the adsorption process develops in two stages as rapid and slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.

  20. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

  1. Atypical birefringence pattern and the diagnostic ability of scanning laser polarimetry with enhanced corneal compensation in glaucoma.

    PubMed

    Rao, Harsha L; Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S

    2015-03-01

    To evaluate the effect of typical scan score (TSS), when within the acceptable limits, on the diagnostic performance of retinal nerve fibre layer (RNFL) parameters with the enhanced corneal compensation (ECC) protocol of scanning laser polarimetry (SLP) in glaucoma. In a cross-sectional study, 203 eyes of 160 glaucoma patients and 140 eyes of 104 control subjects underwent RNFL imaging with the ECC protocol of SLP. TSS was used to quantify atypical birefringence pattern (ABP) images. Influence of TSS on the diagnostic ability of SLP parameters was evaluated by receiver operating characteristic (ROC) regression models after adjusting for the effect of disease severity [based on mean deviation (MD)] on standard automated perimetry). Diagnostic abilities of all RNFL parameters of SLP increased when the TSS values were higher. This effect was statistically significant for TSNIT (coefficient: 0.08, p<0.001) and inferior average parameters (coefficient: 0.06, p=0.002) but not for nerve fibre indicator (NFI, coefficient: 0.03, p=0.21). In early glaucoma (MD of -5 dB), predicted area under ROC curve (AUC) for TSNIT average parameter improved from 0.642 at a TSS of 90 to 0.845 at a TSS of 100. In advanced glaucoma (MD of -15 dB), AUC for TSNIT average improved from 0.832 at a TSS of 90 to 0.947 at 100. Diagnostic performances of TSNIT and inferior average RNFL parameters with ECC protocol of SLP were significantly influenced by TSS even when the TSS values were within the acceptable limits. Diagnostic ability of NFI was unaffected by TSS values. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  2. A variant of sparse partial least squares for variable selection and data exploration.

    PubMed

    Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina

    2014-01-01

    When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.

  3. Measurement and modelling of the y-direction apparent mass of sitting human body-cushioned seat system

    NASA Astrophysics Data System (ADS)

    Stein, George Juraj; Múčka, Peter; Hinz, Barbara; Blüthner, Ralph

    2009-04-01

    Laboratory tests were conducted using 13 male subjects seated on a cushioned commercial vehicle driver's seat. The hands gripped a mock-up steering wheel and the subjects were in contact with the lumbar region of the backrest. The accelerations and forces in the y-direction were measured during random lateral whole-body vibration with a frequency range between 0.25 and 30 Hz, vibration magnitudes 0.30, 0.98, and 1.92 m s -2 (unweighted root mean square (rms)). Based on these laboratory measurements, a linear multi-degree-of-freedom (mdof) model of the seated human body and cushioned seat in the lateral direction ( y-axis) was developed. Model parameters were identified from averaged measured apparent mass values (modulus and phase) for the three excitation magnitudes mentioned. A preferred model structure was selected from four 3-dof models analysed. The mean subject parameters were identified. In addition, identification of each subject's apparent mass model parameters was performed. The results are compared with previous studies. The developed model structure and the identified parameters can be used for further biodynamical research in seating dynamics.

  4. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

    The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  5. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  6. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released inmore » January 2009, and is available on the Web through (http://www-nds.iaea.org/RIPL-3/). This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and {gamma}-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from {sup 51}V to {sup 239}Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.« less

  7. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Capote,R.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released inmore » January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and {gamma}-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from {sup 51}V to {sup 239}Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.« less

  8. The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling

    NASA Technical Reports Server (NTRS)

    Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.

    2013-01-01

    The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.

  9. A universal surface complexation framework for modeling proton binding onto bacterial surfaces in geologic settings

    USGS Publications Warehouse

    Borrok, D.; Turner, B.F.; Fein, J.B.

    2005-01-01

    Adsorption onto bacterial cell walls can significantly affect the speciation and mobility of aqueous metal cations in many geologic settings. However, a unified thermodynamic framework for describing bacterial adsorption reactions does not exist. This problem originates from the numerous approaches that have been chosen for modeling bacterial surface protonation reactions. In this study, we compile all currently available potentiometric titration datasets for individual bacterial species, bacterial consortia, and bacterial cell wall components. Using a consistent, four discrete site, non-electrostatic surface complexation model, we determine total functional group site densities for all suitable datasets, and present an averaged set of 'universal' thermodynamic proton binding and site density parameters for modeling bacterial adsorption reactions in geologic systems. Modeling results demonstrate that the total concentrations of proton-active functional group sites for the 36 bacterial species and consortia tested are remarkably similar, averaging 3.2 ?? 1.0 (1??) ?? 10-4 moles/wet gram. Examination of the uncertainties involved in the development of proton-binding modeling parameters suggests that ignoring factors such as bacterial species, ionic strength, temperature, and growth conditions introduces relatively small error compared to the unavoidable uncertainty associated with the determination of cell abundances in realistic geologic systems. Hence, we propose that reasonable estimates of the extent of bacterial cell wall deprotonation can be made using averaged thermodynamic modeling parameters from all of the experiments that are considered in this study, regardless of bacterial species used, ionic strength, temperature, or growth condition of the experiment. The average site densities for the four discrete sites are 1.1 ?? 0.7 ?? 10-4, 9.1 ?? 3.8 ?? 10-5, 5.3 ?? 2.1 ?? 10-5, and 6.6 ?? 3.0 ?? 10-5 moles/wet gram bacteria for the sites with pKa values of 3.1, 4.7, 6.6, and 9.0, respectively. It is our hope that this thermodynamic framework for modeling bacteria-proton binding reactions will also provide the basis for the development of an internally consistent set of bacteria-metal binding constants. 'Universal' constants for bacteria-metal binding reactions can then be used in conjunction with equilibrium constants for other important metal adsorption and complexation reactions to calculate the overall distribution of metals in realistic geologic systems.

  10. Optimization of Surface Roughness Parameters of Al-6351 Alloy in EDC Process: A Taguchi Coupled Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-10-01

    This paper investigates the application of Taguchi method with fuzzy logic for multi objective optimization of roughness parameters in electro discharge coating process of Al-6351 alloy with powder metallurgical compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters like composition and compaction load and electro discharge machining parameters like pulse-on time and peak current. Crucial roughness parameters like Centre line average roughness, Average maximum height of the profile and Mean spacing of local peaks of the profile were measured on the coated specimen. The signal to noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). Best COM obtained with lower values of compaction load, pulse-on time and current and 30:70 (SiC:Cu) composition of tool. Analysis of variance is carried out and a significant COM model is observed with peak current yielding highest contribution followed by pulse-on time, compaction load and composition. The deposited layer is characterised by X-Ray Diffraction analysis which confirmed the presence of tool materials on the work piece surface.

  11. Estimating catchment-scale groundwater dynamics from recession analysis - enhanced constraining of hydrological models

    NASA Astrophysics Data System (ADS)

    Skaugen, Thomas; Mengistu, Zelalem

    2016-12-01

    In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average of the Nash-Sutcliffe efficiency criterion of 0.73 was obtained using the new estimated storage routine compared with 0.75 using calibrated storage routine. The average Kling-Gupta efficiency criterion was 0.80 and 0.81 for the new and old storage routine, respectively. Runoff recessions are more realistically modelled using the new approach since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50 % using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.

  12. Models of H II regions - Heavy element opacity, variation of temperature

    NASA Technical Reports Server (NTRS)

    Rubin, R. H.

    1985-01-01

    A detailed set of H II region models that use the same physics and self-consistent input have been computed and are used to examine where in parameter space the effects of heavy element opacity is important. The models are briefly described, and tabular data for the input parameters and resulting properties of the models are presented. It is found that the opacities of C, Ne, O, and to a lesser extent N play a vital role over a large region of parameter space, while S and Ar opacities are negligible. The variation of the average electron temperature T(e) of the models with metal abundance, density, and T(eff) is investigated. It is concluded that by far the most important determinator of T(e) is metal abundance; an almost 7000 K difference is expected over the factor of 10 change from up to down abundances.

  13. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE PAGES

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.; ...

    2017-12-21

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters,more » $$C\\!P$$  violation parameters, parameters of semileptonic decays, and Cabbibo–Kobayashi–Maskawa matrix elements.« less

  14. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters,more » $$C\\!P$$  violation parameters, parameters of semileptonic decays, and Cabbibo–Kobayashi–Maskawa matrix elements.« less

  15. Modeling of Solid State Transformer for the FREEDM System Demonstration

    NASA Astrophysics Data System (ADS)

    Jiang, Youyuan

    The Solid State Transformer (SST) is an essential component in the FREEDM system. This research focuses on the modeling of the SST and the controller hardware in the loop (CHIL) implementation of the SST for the support of the FREEDM system demonstration. The energy based control strategy for a three-stage SST is analyzed and applied. A simplified average model of the three-stage SST that is suitable for simulation in real time digital simulator (RTDS) has been developed in this study. The model is also useful for general time-domain power system analysis and simulation. The proposed simplified av-erage model has been validated in MATLAB and PLECS. The accuracy of the model has been verified through comparison with the cycle-by-cycle average (CCA) model and de-tailed switching model. These models are also implemented in PSCAD, and a special strategy to implement the phase shift modulation has been proposed to enable the switching model simulation in PSCAD. The implementation of the CHIL test environment of the SST in RTDS is described in this report. The parameter setup of the model has been discussed in detail. One of the dif-ficulties is the choice of the damping factor, which is revealed in this paper. Also the grounding of the system has large impact on the RTDS simulation. Another problem is that the performance of the system is highly dependent on the switch parameters such as voltage and current ratings. Finally, the functionalities of the SST have been realized on the platform. The distributed energy storage interface power injection and reverse power flow have been validated. Some limitations are noticed and discussed through the simulation on RTDS.

  16. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    PubMed

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-07

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein almost linearly increased with the temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins in differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward the usage of heavier-size and more hydrophobic residues with respect to mesophiles, whereas the cold-adapted macromolecules show the opposite behavior with a certain preference for smaller-size and less hydrophobic residues. An investigation on the different increase of bulky residues along with the growth temperature observed in the six model proteins suggests the relevance of the possible different role and/or structure organization played by protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was collectively carried out on all model proteins.

  17. Statistical properties of the time histories of cosmic gamma-ray bursts detected by the BATSE experiment of the Compton gamma-ray observatory

    NASA Technical Reports Server (NTRS)

    Sagdeev, Roald

    1995-01-01

    The main scientific objectives of the project were: (1) Calculation of average time history for different subsets of BATSE gamma-ray bursts; (2) Comparison of averaged parameters and averaged time history for different Burst And Transient Source Experiments (BASTE) Gamma Ray Bursts (GRB's) sets; (3) Comparison of results obtained with BATSE data with those obtained with APEX experiment at PHOBOS mission; and (4) Use the results of (1)-(3) to compare current models of gamma-ray bursts sources.

  18. The average motion of a charged particle in a dipole field

    NASA Technical Reports Server (NTRS)

    Chen, A. J.; Stern, D. P.

    1974-01-01

    The numerical representation of the average motion of a charged particle trapped in a geomagnetic field is developed. An assumption is made of the conservation of the first two adiabatic invariants where integration is along a field line between mirror points. The averaged motion also involved the parameters defining the magnetic field line to which the particle is attached. Methods involved in obtaining the motion in the equatorial plane of model magnetospheres are based on Hamiltonian functions. The restrictions imposed by the special nature of the dipole field are defined.

  19. A climatology of gravity wave parameters based on satellite limb soundings

    NASA Astrophysics Data System (ADS)

    Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Riese, Martin

    2017-04-01

    Gravity waves are one of the main drivers of atmospheric dynamics. The resolution of most global circulation models (GCMs) and chemistry climate models (CCMs), however, is too coarse to properly resolve the small scales of gravity waves. Horizontal scales of gravity waves are in the range of tens to a few thousand kilometers. Gravity wave source processes involve even smaller scales. Therefore GCMs/CCMs usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified, and comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. In our study, we present a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). We provide various gravity wave parameters (for example, gravity variances, potential energies and absolute momentum fluxes). This comprehensive climatological data set can serve for comparison with other instruments (ground based, airborne, or other satellite instruments), as well as for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The purpose of providing various different parameters is to make our data set useful for a large number of potential users and to overcome limitations of other observation techniques, or of models, that may be able to provide only one of those parameters. We present a climatology of typical average global distributions and of zonal averages, as well as their natural range of variations. In addition, we discuss seasonal variations of the global distribution of gravity waves, as well as limitations of our method of deriving gravity wave parameters from satellite data.

  20. Data free inference with processed data products

    DOE PAGES

    Chowdhary, K.; Najm, H. N.

    2014-07-12

    Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.

  1. A history of presatellite investigations of the earth's radiation budget

    NASA Technical Reports Server (NTRS)

    Hunt, G. E.; Kandel, R.; Mecherikunnel, A. T.

    1986-01-01

    The history of radiation budget studies from the early twentieth century to the advent of the space age is reviewed. By the beginning of the 1960's, accurate radiative models had been developed capable of estimating the global and zonally averaged components of the radiation budget, though great uncertainty in the derived parameters existed due to inaccuracy of the data describing the physical parameters used in the model, associated with clouds, the solar radiation, and the gaseous atmospheric absorbers. Over the century, the planetary albedo estimates had reduced from 89 to 30 percent.

  2. Predicting root zone soil moisture with soil properties and satellite near-surface moisture data across the conterminous United States

    NASA Astrophysics Data System (ADS)

    Baldwin, D.; Manfreda, S.; Keller, K.; Smithwick, E. A. H.

    2017-03-01

    Satellite-based near-surface (0-2 cm) soil moisture estimates have global coverage, but do not capture variations of soil moisture in the root zone (up to 100 cm depth) and may be biased with respect to ground-based soil moisture measurements. Here, we present an ensemble Kalman filter (EnKF) hydrologic data assimilation system that predicts bias in satellite soil moisture data to support the physically based Soil Moisture Analytical Relationship (SMAR) infiltration model, which estimates root zone soil moisture with satellite soil moisture data. The SMAR-EnKF model estimates a regional-scale bias parameter using available in situ data. The regional bias parameter is added to satellite soil moisture retrievals before their use in the SMAR model, and the bias parameter is updated continuously over time with the EnKF algorithm. In this study, the SMAR-EnKF assimilates in situ soil moisture at 43 Soil Climate Analysis Network (SCAN) monitoring locations across the conterminous U.S. Multivariate regression models are developed to estimate SMAR parameters using soil physical properties and the moderate resolution imaging spectroradiometer (MODIS) evapotranspiration data product as covariates. SMAR-EnKF root zone soil moisture predictions are in relatively close agreement with in situ observations when using optimal model parameters, with root mean square errors averaging 0.051 [cm3 cm-3] (standard error, s.e. = 0.005). The average root mean square error associated with a 20-fold cross-validation analysis with permuted SMAR parameter regression models increases moderately (0.082 [cm3 cm-3], s.e. = 0.004). The expected regional-scale satellite correction bias is negative in four out of six ecoregions studied (mean = -0.12 [-], s.e. = 0.002), excluding the Great Plains and Eastern Temperate Forests (0.053 [-], s.e. = 0.001). With its capability of estimating regional-scale satellite bias, the SMAR-EnKF system can predict root zone soil moisture over broad extents and has applications in drought predictions and other operational hydrologic modeling purposes.

  3. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) - Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find thatmore » using model parameters calibrated by the least-square fitting provides little improvements in the model simulations but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.« less

  4. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameters values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll- a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  5. Temporal modelling and forecasting of the airborne pollen of Cupressaceae on the southwestern Iberian Peninsula.

    PubMed

    Silva-Palacios, Inmaculada; Fernández-Rodríguez, Santiago; Durán-Barroso, Pablo; Tormo-Molina, Rafael; Maya-Manzano, José María; Gonzalo-Garijo, Ángela

    2016-02-01

    Cupressaceae includes species cultivated as ornamentals in the urban environment. This study aims to investigate airborne pollen data for Cupressaceae on the southwestern Iberian Peninsula over a 21-year period and to analyse the trends in these data and their relationship with meteorological parameters using time series analysis. Aerobiological sampling was conducted from 1993 to 2013 in Badajoz (SW Spain). The main pollen season for Cupressaceae lasted, on average, 58 days, ranging from 55 to 112 days, from 24 January to 22 March. Furthermore, a short-term forecasting model has been developed for daily pollen concentrations. The model proposed to forecast the airborne pollen concentration is described by one equation. This expression is composed of two terms: the first term represents the pollen concentration trend in the air according to the average concentration of the previous 10 days; the second term is obtained from considering the actual pollen concentration value, which is calculated based on the most representative meteorological parameters multiplied by a fitting coefficient. Temperature was the main meteorological factor by its influence over daily pollen forecast, being the rain the second most important factor. This model represents a good approach to a continuous balance model of Cupressaceae pollen concentration and is supported by a close agreement between the observed and predicted mean concentrations. The novelty of the proposed model is the analysis of meteorological parameters that are not frequently used in Aerobiology.

  6. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).

  7. TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Gordon, J; Chetty, I

    2014-06-15

    Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr and cellmore » survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, Chvetsov model (C-model) and Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cellpopulation differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.« less

  8. A model study of aggregates composed of spherical soot monomers with an acentric carbon shell

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Zhang, Yongming; Zhang, Qixing

    2018-01-01

    The influence of morphology on the optical properties of soot particles has gained increasing attention. However, few studies have examined how the manner in which primary particles are coated affects these properties. To understand how the coating of primary particles affects the optical properties of soot particles, coated soot particles were simulated using an acentric core-shell monomer (ACM) model, generated by randomly displacing the cores of a concentric core-shell monomer (CCM) model. Single scattering properties of the CCM model with identical fractal parameters were first calculated 50 times to evaluate the optical diversity of different realizations of fractal aggregates with identical parameters. The results show that the optical diversity of different realizations of fractal aggregates with identical parameters cannot be eliminated by averaging over ten random realizations. To preserve the fractal characteristics, 10 realizations of each model were generated from the same 10 parent fractal aggregates, and the results were then averaged over the 10 realizations of each model. The single scattering properties of all models were calculated using the numerically exact multiple-sphere T-matrix (MSTM) method. It is found that the single scattering properties of randomly coated soot particles calculated with the ACM model are extremely close to those of the CCM model and of a homogeneous aggregate (HA) model based on Maxwell-Garnett effective medium theory. These results differ from previous studies; the reason may be that the differences reported previously were caused by fractal characteristics rather than by the coating models. Our findings indicate that how the individual primary particles are coated has little effect on the single scattering properties of soot particles with acentric core-shell monomers. This work provides guidance for scattering model simplification and model selection.
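    The HA comparison relies on Maxwell-Garnett effective medium theory, whose mixing rule is standard and can be sketched directly; the refractive indices and volume fraction below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def maxwell_garnett(eps_incl, eps_host, f):
    """Maxwell-Garnett effective permittivity for inclusions of volume
    fraction f embedded in a host medium."""
    num = eps_incl + 2.0 * eps_host + 2.0 * f * (eps_incl - eps_host)
    den = eps_incl + 2.0 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# illustrative refractive indices (soot core, weakly absorbing coating)
m_core, m_coat = 1.95 + 0.79j, 1.55 + 0.001j
eps_eff = maxwell_garnett(m_core ** 2, m_coat ** 2, f=0.2)
print("effective refractive index:", np.sqrt(eps_eff))
```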

  9. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables in the case of 252Cf(sf) are investigated by exploring several models involved in the excitation energy sharing and spin-parity assignment between primary fission fragments. In a first step, the parameters used in the FIFRELIN Monte Carlo code "reference route" are presented: two parameters for the mass-dependent temperature ratio law and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the initial fragment entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities. When these target observables are achieved, all other fission observables can be predicted. We show here the influence of input parameters on the saw-tooth curve and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients or photon strength functions remains unchanged.
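    A spin cut-off parameter σ conventionally enters through the spin distribution P(J) ∝ (2J+1) exp(−J(J+1)/(2σ²)); the sketch below samples fragment spins from that standard form, with σ values that are illustrative assumptions rather than FIFRELIN's calibrated constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spins(sigma, n, j_max=30):
    """Draw fragment spins from the spin cut-off distribution
    P(J) ~ (2J+1) * exp(-J(J+1) / (2*sigma^2)) on integer J."""
    j = np.arange(j_max + 1)
    p = (2 * j + 1) * np.exp(-j * (j + 1) / (2.0 * sigma ** 2))
    p = p / p.sum()
    return rng.choice(j, size=n, p=p)

# illustrative constant cut-off parameters for light and heavy fragments
print("light:", sample_spins(sigma=8.0, n=5))
print("heavy:", sample_spins(sigma=9.5, n=5))
```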

  10. How does the cosmic large-scale structure bias the Hubble diagram?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleury, Pierre; Clarkson, Chris; Maartens, Roy, E-mail: pierre.fleury@uct.ac.za, E-mail: chris.clarkson@qmul.ac.uk, E-mail: roy.maartens@gmail.com

    2017-03-01

    The Hubble diagram is one of the cornerstones of observational cosmology. It is usually analysed assuming that, on average, the underlying relation between magnitude and redshift matches the prediction of a Friedmann-Lemaître-Robertson-Walker model. However, the inhomogeneity of the Universe generically biases these observables, mainly due to peculiar velocities and gravitational lensing, in a way that depends on the notion of average used in theoretical calculations. In this article, we carefully derive the notion of average which corresponds to the observation of the Hubble diagram. We then calculate its bias at second order in cosmological perturbations, and estimate the consequences on the inference of cosmological parameters, for various current and future surveys. We find that this bias deeply affects direct estimations of the evolution of the dark-energy equation of state. However, errors in the standard inference of cosmological parameters remain smaller than observational uncertainties, even though they reach percent level on some parameters; they reduce to sub-percent level if an optimal distance indicator is used.
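    As a reference point for the magnitude-redshift relation discussed here, the sketch below evaluates the flat-FLRW luminosity distance and distance modulus by direct numerical integration; the cosmological parameter values are conventional placeholders, not the paper's.

```python
import numpy as np

def luminosity_distance(z, H0=70.0, Om=0.3):
    """Flat-FLRW luminosity distance (Mpc) by trapezoidal integration of
    1/E(z), with E(z) = sqrt(Om*(1+z)^3 + (1-Om))."""
    c = 299792.458                         # km/s
    zs = np.linspace(0.0, z, 2048)
    Ez = np.sqrt(Om * (1.0 + zs) ** 3 + (1.0 - Om))
    return (1.0 + z) * c / H0 * np.trapz(1.0 / Ez, zs)

def distance_modulus(z, **kw):
    """mu = 5 log10(d_L / 10 pc), the magnitude-redshift relation."""
    return 5.0 * np.log10(luminosity_distance(z, **kw) * 1e6 / 10.0)

print(distance_modulus(0.5))   # background prediction a supernova survey would fit
```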

  11. Impact of Canopy Coupling on Canopy Average Stomatal Conductance Across Seven Tree Species in Northern Wisconsin

    NASA Astrophysics Data System (ADS)

    Ewers, B. E.; Mackay, D. S.; Samanta, S.; Ahl, D. E.; Burrows, S. S.; Gower, S. T.

    2001-12-01

    Land use changes over the last century in northern Wisconsin have resulted in a heterogeneous landscape composed of the following four main forest types: northern hardwoods, northern conifer, aspen/fir, and forested wetland. Based on sap flux measurements, aspen/fir has twice the canopy transpiration of northern hardwoods. In addition, daily transpiration was only explained by daily average vapor pressure deficit across the cover types. The objective of this study was to determine if canopy average stomatal conductance could be used to explain the species effects on tree transpiration. Our first hypothesis is that across all of the species, stomatal conductance will respond to vapor pressure deficit so as to maintain a minimum leaf water potential to prevent catastrophic cavitation. The consequence of this hypothesis is that among species and individuals there is a proportionality between high stomatal conductance and the sensitivity of stomatal conductance to vapor pressure deficit. Our second hypothesis is that species that do not follow the proportionality deviate because their canopies are decoupled from the atmosphere. To test our two hypotheses we calculated canopy average stomatal conductance from sap flux measurements using an inversion of the Penman-Monteith equation. We estimated the canopy coupling using a leaf energy budget model that requires leaf transpiration and canopy aerodynamic conductance. We optimized the parameters of the aerodynamic conductance model using a Monte Carlo technique across six parameters. We determined the optimal model for each species by selecting parameter sets that resulted in the proportionality of our first hypothesis. We then tested the optimal energy budget models of each species by comparing leaf temperature and leaf width predicted by the models to measurements of each tree species. In red pine, sugar maple, and trembling aspen trees under high canopy coupling conditions, we found the hypothesized proportionality between high stomatal conductance and the sensitivity of stomatal conductance to vapor pressure deficit. In addition, the canopy conductance of trembling aspen was twice as high as that of sugar maple, and the aspen trees showed much more variability.
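    For a canopy well coupled to the atmosphere, the Penman-Monteith inversion used in sap-flux studies reduces to the simplified form G_s = K_G(T_a)·E/D; the sketch below applies it to synthetic half-hourly data and then checks the hypothesized proportionality by regressing G_s on ln(D). The transpiration series, coefficient form, and resulting ratio are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def canopy_conductance(E, D, Ta):
    """Canopy-average stomatal conductance (m s-1) from transpiration
    E (kg m-2 s-1) and vapor pressure deficit D (kPa), via the simplified
    Penman-Monteith inversion G_s = K_G(Ta) * E / D, valid for canopies
    well coupled to the atmosphere."""
    K_G = 115.8 + 0.4236 * Ta          # conductance coefficient (kPa m3 kg-1)
    return K_G * E / D

# synthetic half-hourly data: E rises with D but saturates (illustrative)
D = np.linspace(0.4, 2.5, 40)                        # kPa
Gs = canopy_conductance(E=2e-5 * np.sqrt(D), D=D, Ta=20.0)

# hypothesis check: G_s = G_ref - m*ln(D); proportionality predicts m/G_ref ~ 0.6
slope, G_ref = np.polyfit(np.log(D), Gs, 1)
print("sensitivity ratio m/G_ref =", -slope / G_ref)
```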

  12. A Similarity Criterion for Supersonic Flow Past a Cylinder with a Frontal High-Porosity Cellular Insert

    NASA Astrophysics Data System (ADS)

    Mironov, S. G.; Poplavskaya, T. V.; Kirilovskiy, S. V.; Maslov, A. A.

    2018-03-01

    We have experimentally and numerically studied the influence of the ratio of the diameter of a cylinder with a frontal gas-permeable porous insert made of nickel sponge to the average pore diameter in the insert on the aerodynamic drag of this model body in supersonic airflow (M∞ = 4.85, 7, and 21). The analytical dependence of the normalized drag coefficient on a parameter involving the Mach number and the ratio of cylinder radius to average pore radius in the insert is obtained. It is suggested to use this parameter as a similarity criterion in the problem of supersonic airflow past a cylinder with a frontal high-porosity cellular insert.

  13. A coupled metabolic-hydraulic model and calibration scheme for estimating whole-river metabolism during dynamic flow conditions

    USGS Publications Warehouse

    Payn, Robert A.; Hall, Robert O Jr.; Kennedy, Theodore A.; Poole, Geoff C; Marshall, Lucy A.

    2017-01-01

    Conventional methods for estimating whole-stream metabolic rates from measured dissolved oxygen dynamics do not account for the variation in solute transport times created by dynamic flow conditions. Changes in flow at hourly time scales are common downstream of hydroelectric dams (i.e. hydropeaking), and hydrologic limitations of conventional metabolic models have resulted in a poor understanding of the controls on biological production in these highly managed river ecosystems. To overcome these limitations, we coupled a two-station metabolic model of dissolved oxygen dynamics with a hydrologic river routing model. We designed calibration and parameter estimation tools to infer values for hydrologic and metabolic parameters based on time series of water quality data, achieving the ultimate goal of estimating whole-river gross primary production and ecosystem respiration during dynamic flow conditions. Our case study data for model design and calibration were collected in the tailwater of Glen Canyon Dam (Arizona, USA), a large hydropower facility where the mean discharge was 325 m3 s-1 and the average daily coefficient of variation of flow was 0.17 (i.e. the hydropeaking index averaged from 2006 to 2016). We demonstrate the coupled model’s conceptual consistency with conventional models during steady flow conditions, and illustrate the potential bias in metabolism estimates with conventional models during unsteady flow conditions. This effort contributes an approach to solute transport modeling and parameter estimation that allows study of whole-ecosystem metabolic regimes across a more diverse range of hydrologic conditions commonly encountered in streams and rivers.
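    For contrast with the coupled model, a conventional steady-flow metabolism model can be sketched as a single differential equation for dissolved oxygen; the forward-Euler version below uses a half-sinusoid light curve and hypothetical rate values, and stands in for the "conventional models" mentioned here rather than the authors' code.

```python
import numpy as np

def simulate_do(hours, gpp_daily, er_daily, k, depth, O_sat=9.0, O0=8.5):
    """Forward-Euler integration of a conventional steady-flow oxygen model,
        dO/dt = GPP(t)/z - ER/z + k*(O_sat - O),
    with GPP following a half-sinusoid over daylight. Rates are daily
    (g O2 m-2 d-1 for GPP/ER, d-1 for k); O is in g O2 m-3."""
    dt = 1.0 / 24.0                               # one-hour step, in days
    O = np.empty(hours + 1)
    O[0] = O0
    for i in range(hours):
        hod = i % 24
        light = np.sin(np.pi * (hod - 6) / 12.0) if 6 <= hod <= 18 else 0.0
        gpp = gpp_daily * light * np.pi           # daily integral equals gpp_daily
        O[i + 1] = O[i] + ((gpp - er_daily) / depth + k * (O_sat - O[i])) * dt
    return O

# hypothetical rates for a 48 h simulation of a 3 m deep reach
print(simulate_do(48, gpp_daily=6.0, er_daily=5.0, k=4.0, depth=3.0)[-5:])
```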

  14. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. Overall, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
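    A minimal sketch of the path-sampling idea on a one-dimensional toy problem: Metropolis chains target the power posterior p(θ|D,t) ∝ L(θ)^t p(θ) at several values of t, and the log marginal likelihood is the integral over t of the expected log-likelihood. The toy Gaussian problem, polynomial temperature schedule, and tuning constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy problem: data ~ N(theta, 1), prior theta ~ N(0, 10^2)
data = rng.normal(2.0, 1.0, size=20)

def log_like(th):
    return -0.5 * np.sum((data - th) ** 2) - 0.5 * data.size * np.log(2 * np.pi)

def log_prior(th):
    return -0.5 * (th / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

def mean_loglike_at(t, n_iter=5000, step=0.8):
    """Metropolis chain targeting the power posterior L(th)^t * p(th);
    returns the path expectation E_t[log L] needed by the integration."""
    th = 0.0
    lp = t * log_like(th) + log_prior(th)
    trace = []
    for _ in range(n_iter):
        prop = th + step * rng.normal()
        lp_prop = t * log_like(prop) + log_prior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            th, lp = prop, lp_prop
        trace.append(log_like(th))
    return np.mean(trace[n_iter // 2:])          # discard burn-in

# temperature schedule concentrated near t=0, where the integrand varies most
ts = np.linspace(0.0, 1.0, 11) ** 5
log_Z = np.trapz([mean_loglike_at(t) for t in ts], ts)
print("log marginal likelihood by thermodynamic integration:", log_Z)
```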

  15. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise where a fast and accurate ARMA modeling method is required. PMID:26437409
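    The core idea, treating the ARMA coefficients as the Kalman state, can be sketched for an AR(2) simplification; the Huber-style inflation of the innovation variance below is a crude stand-in for the paper's robust, time-varying noise estimation, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_ar2_kalman(z, q=1e-6, r=0.01, huber_k=2.0):
    """Estimate AR(2) coefficients by treating them as the Kalman state:
        x_k = x_{k-1} + w_k,   z_k = [z_{k-1}, z_{k-2}] x_k + v_k.
    Inflating the innovation variance for outliers (Huber-style) crudely
    mimics robust filtering; it is an assumption, not the paper's scheme."""
    x = np.zeros(2)                    # state: [a1, a2]
    P = np.eye(2)
    for k in range(2, len(z)):
        H = np.array([z[k - 1], z[k - 2]])
        P = P + q * np.eye(2)          # random-walk prediction of the state
        innov = z[k] - H @ x
        S = H @ P @ H + r
        if innov ** 2 > huber_k ** 2 * S:     # downweight outlying innovations
            S = innov ** 2 / huber_k ** 2
        K = P @ H / S
        x = x + K * innov
        P = P - np.outer(K, H @ P)
    return x

# synthetic gyro-like noise from a known AR(2) process
z = np.zeros(3000)
for k in range(2, z.size):
    z[k] = 0.6 * z[k - 1] - 0.2 * z[k - 2] + rng.normal(scale=0.1)
print(estimate_ar2_kalman(z))          # should approach [0.6, -0.2]
```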

  16. Scaling theory in a model of corrosion and passivation.

    PubMed

    Aarão Reis, F D A; Stafiej, Janusz; Badiali, J-P

    2006-09-07

    We study a model for corrosion and passivation of a metallic surface after small damage of its protective layer using scaling arguments and simulation. We focus on the transition between an initial regime of slow corrosion rate (pit nucleation) and a regime of rapid corrosion (propagation of the pit), which takes place at the so-called incubation time. The model is defined on a lattice in which the states of the sites represent the possible states of the metal (bulk, reactive, and passive) and the solution (neutral, acidic, or basic). Simple probabilistic rules describe passivation of the metal surface, dissolution of the passive layer, which is enhanced in acidic media, and spatially separated electrochemical reactions, which may create pH inhomogeneities in the solution. On the basis of a suitable matching of characteristic times of creation and annihilation of pH inhomogeneities in the solution, our scaling theory estimates the average radius of the dissolved region at the incubation time as a function of the model parameters. Among the main consequences, that radius decreases with the rate of spatially separated reactions and the rate of dissolution in acidic media, and it increases with the diffusion coefficient of H(+) and OH(-) ions in solution. The average incubation time can be written as the sum of a series of characteristic times for the slow dissolution in neutral media, until significant pH inhomogeneities are observed in the dissolved cavity. Despite having a more complex dependence on the model parameters, the average incubation time is shown to increase linearly with the rate of dissolution in neutral media, under the reasonable assumption that this is the slowest rate of the process. Our theoretical predictions are expected to apply in realistic ranges of values of the model parameters. They are confirmed by numerical simulation in two-dimensional lattices, and the expected extension of the theory to three dimensions is discussed.

  17. Experimental study and thermodynamic modeling for determining the effect of non-polar solvent (hexane)/polar solvent (methanol) ratio and moisture content on the lipid extraction efficiency from Chlorella vulgaris.

    PubMed

    Malekzadeh, Mohammad; Abedini Najafabadi, Hamed; Hakim, Maziar; Feilizadeh, Mehrzad; Vossoughi, Manouchehr; Rashtchian, Davood

    2016-02-01

    In this research, an organic solvent composed of hexane and methanol was used for lipid extraction from dry and wet biomass of Chlorella vulgaris. The results indicated that the lipid and fatty acid extraction yield decreased with increasing moisture content of the biomass. However, the maximum extraction efficiency was attained by applying an equivolume mixture of hexane and methanol for both dry and wet biomass. Thermodynamic modeling was employed to estimate the effect of the hexane/methanol ratio and moisture content on fatty acid extraction yield. The Hansen solubility parameter was used in adjusting the interaction parameters of the model, which reduced the number of tuning parameters from 6 to 2. The results indicated that the model can accurately estimate the fatty acid recovery, with average absolute deviation percentages (AAD%) of 13.90% and 15.00% for the two cases of using 6 and 2 adjustable parameters, respectively.

  18. Extracting survival parameters from isothermal, isobaric, and "iso-concentration" inactivation experiments by the "3 end points method".

    PubMed

    Corradini, M G; Normand, M D; Newcomer, C; Schaffner, D W; Peleg, M

    2009-01-01

    Theoretically, if an organism's resistance can be characterized by 3 survival parameters, they can be found by solving 3 simultaneous equations that relate the final survival ratio to the lethal agent's intensity. (For 2 resistance parameters, 2 equations will suffice.) In practice, the inevitable experimental scatter would distort the results of such a calculation or render the method unworkable. Averaging the results obtained with more than 3 final survival ratio triplet combinations, determined in four or more treatments, can remove this impediment. This can be confirmed by the ability of a kinetic inactivation model derived from the averaged parameters to predict survival patterns under conditions not employed in their determination, as demonstrated with published isothermal survival data of Clostridium botulinum spores, isobaric data of Escherichia coli under HPP, and Pseudomonas exposed to hydrogen peroxide. Both the method and the underlying assumption that the inactivation followed a Weibull-Log logistic (WeLL) kinetics were confirmed in this way, indicating that when an appropriate survival model is available, it is possible to predict the entire inactivation curves from several experimental final survival ratios alone. Where applicable, the method could simplify the experimental procedure and lower the cost of microbial resistance determinations. In principle, the methodology can be extended to deteriorative chemical reactions if they too can be characterized by 2 or 3 kinetic parameters.
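    A sketch of the "3 end points method" for the WeLL model named here: assuming log10 S(t) = −b(T)·tⁿ with b(T) = ln(1 + exp(k(T − Tc))), each triplet of final survival ratios defines three equations in the parameters (k, Tc, n), and the solutions are averaged over all triplet combinations. The end-point data below are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import fsolve

def well_log10_survival(t, T, k, Tc, n):
    """WeLL kinetics: log10 S(t) = -b(T) * t^n, b(T) = ln(1 + exp(k(T - Tc)))."""
    return -np.log(1.0 + np.exp(k * (T - Tc))) * t ** n

def solve_triplet(endpoints):
    """Solve the 3 simultaneous equations given by 3 end points
    (t_i, T_i, log10 S_i) for the survival parameters (k, Tc, n)."""
    def eqs(p):
        k, Tc, n = p
        return [well_log10_survival(t, T, k, Tc, n) - s for t, T, s in endpoints]
    return fsolve(eqs, x0=[0.3, 100.0, 0.5])

# hypothetical end points: (minutes, deg C, final log10 survival ratio)
pts = [(10, 101, -0.7), (10, 105, -1.7), (5, 110, -3.2), (5, 113, -4.7)]
solutions = [solve_triplet(c) for c in combinations(pts, 3)]
print("averaged (k, Tc, n):", np.mean(solutions, axis=0))
```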

  19. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522

  20. Comparison of the WSA-ENLIL model with three CME cone types

    NASA Astrophysics Data System (ADS)

    Jang, Soojeong; Moon, Y.; Na, H.

    2013-07-01

    We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.

  1. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  2. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model and blood flow FB and blood volume VB from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, the error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to those generated from the original fully sampled data. Within the region of interest, most error-in-volume-mean values were about 5% or lower, and the average error in volume mean across all parameter maps generated by either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. None of the investigated pharmacokinetic parameters differed significantly between the original data and the reduced sampling data. With sparsely sampled k-space data simulating acquisition accelerated by a factor of 4, the total generalized variation-based iterative image reconstruction method can accurately estimate the investigated dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.
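    The extended Tofts model named here has a standard closed form, C_t(t) = v_p C_p(t) + Ktrans ∫ C_p(u) exp(−(Ktrans/v_e)(t−u)) du, which can be evaluated by discrete convolution; the arterial input function and parameter values below are illustrative assumptions.

```python
import numpy as np

def extended_tofts(t, Cp, Ktrans, ve, vp):
    """Tissue concentration from the extended Tofts model,
        C_t(t) = vp*Cp(t) + Ktrans * int_0^t Cp(u) exp(-(Ktrans/ve)(t-u)) du,
    evaluated by discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-(Ktrans / ve) * t)
    return vp * Cp + Ktrans * np.convolve(Cp, kernel)[: t.size] * dt

# hypothetical arterial input function with a simple gamma-variate shape
t = np.linspace(0.0, 5.0, 300)                 # minutes
Cp = 5.0 * t * np.exp(-t / 0.5)                # mM
Ct = extended_tofts(t, Cp, Ktrans=0.25, ve=0.3, vp=0.05)   # Ktrans in min^-1
print("peak tissue concentration:", Ct.max())
```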

  3. A model for gravity-wave spectra observed by Doppler sounding systems

    NASA Technical Reports Server (NTRS)

    Vanzandt, T. E.

    1986-01-01

    A model for Mesosphere-Stratosphere-Troposphere (MST) radar spectra is developed following the formalism presented by Pinkel (1981). Expressions for the one-dimensional spectra of radial velocity versus frequency and versus radial wave number are presented. Their dependence on the parameters of the gravity-wave spectrum and on the experimental parameters, radar zenith angle and averaging time, is described, and the conditions for critical tests of the gravity-wave hypothesis are discussed. The model spectra are compared with spectra observed in the Arctic summer mesosphere by the Poker Flat radar. This model applies to any monostatic Doppler sounding system, including MST radar, Doppler lidar and Doppler sonar in the atmosphere, and Doppler sonar in the ocean.

  4. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  5. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  6. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons based on one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index has become dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for selecting the optimal method for a composite index. The selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia.

  7. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  8. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  9. Identification of dominant interactions between climatic seasonality, catchment characteristics and agricultural activities on Budyko-type equation parameter estimation

    NASA Astrophysics Data System (ADS)

    Xing, Wanqiu; Wang, Weiguang; Shao, Quanxi; Yong, Bin

    2018-01-01

    Quantifying the partitioning of precipitation (P) into evapotranspiration (E) and runoff (Q) is of great importance for global and regional water availability assessment. The Budyko framework serves as a powerful tool for making a simple and transparent estimate of the partition, using a single parameter to characterize the shape of the Budyko curve for a specific basin, where the single parameter reflects the overall effect of not only climatic seasonality and catchment characteristics (e.g., soil, topography and vegetation) but also agricultural activities (e.g., cultivation and irrigation). At the regional scale, these influencing factors are interconnected, and the interactions between them can also affect the estimation of the single parameter of Budyko-type equations. Here we employ the multivariate adaptive regression splines (MARS) model to estimate the Budyko curve shape parameter (n in Choudhury's equation, one form of the Budyko framework) for 96 selected catchments across China, using a data set of long-term averages for climatic seasonality, catchment characteristics and agricultural activities. Results show that average storm depth (ASD), vegetation coverage (M), and seasonality index of precipitation (SI) are the three statistically significant factors affecting the Budyko parameter. More importantly, four pairs of interactions are recognized by the MARS model: the interaction between CA (percentage of cultivated land area to total catchment area) and ASD shows that cultivation can weaken the reducing effect of high ASD (>46.78 mm) on the estimated Budyko parameter. Drought (represented by a Palmer drought severity index value < -0.74) and uneven distribution of annual rainfall (represented by a coefficient of variation of precipitation > 0.23) tend to enhance the reduction of the Budyko parameter by large SI (>0.797). Low vegetation coverage (34.56%) is likely to intensify the raising effect on the evapotranspiration ratio of IA (percentage of irrigation area to total catchment area). The Budyko n values estimated by the MARS model reproduce those calculated from observations well for the selected 96 catchments (R = 0.817, MAE = 4.09). Compared to a multiple stepwise regression model that estimates the parameter n by taking the influencing factors as independent inputs, the MARS model enhances the capability of the Budyko framework for assessing water availability at the regional scale using readily available data.
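    Choudhury's equation referenced here has the closed form E = P·PET/(Pⁿ + PETⁿ)^(1/n); the sketch below shows how the single parameter n shifts the evaporative fraction for fixed climate forcing, with hypothetical long-term averages.

```python
import numpy as np

def choudhury_E(P, PET, n):
    """Choudhury's form of the Budyko framework:
        E = P * PET / (P^n + PET^n)^(1/n),
    where the single parameter n encodes catchment properties."""
    return P * PET / (P ** n + PET ** n) ** (1.0 / n)

# hypothetical long-term averages (mm/yr) for one catchment
P, PET = 800.0, 1100.0
for n in (1.5, 1.8, 2.6):
    E = choudhury_E(P, PET, n)
    print(f"n = {n}: E/P = {E / P:.2f}, Q/P = {(P - E) / P:.2f}")
```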

  10. Application of the thermorheologically complex nonlinear Adam-Gibbs model for the glass transition to molecular motion in hydrated proteins.

    PubMed

    Hodge, Ian M

    2006-08-01

    The nonlinear, thermorheologically complex Adam-Gibbs (extended "Scherer-Hodge") model for the glass transition is applied to enthalpy relaxation data reported by Sartor, Mayer, and Johari for hydrated methemoglobin. A sensible range of values for the average localized activation energy is obtained (100-200 kJ mol(-1)). The standard deviation of the inferred Gaussian distribution of activation energies, computed from the reported KWW beta-parameter, is approximately 30% of the average, consistent with the suggestion that some relaxation processes in hydrated proteins have exceptionally low activation energies.

  11. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. Identification here refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, to within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.

  12. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS's Disease.

    PubMed

    Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf

    2018-01-01

    In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average SE of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, as the sample size increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that it had the smallest AB-PE and ASE-PE for the longitudinal submodel among the three approaches for small and moderate sample sizes. Moreover, JLCM was desirable for no association and large sample sizes. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.

  13. Physical parameter estimation from porcine ex vivo vocal fold dynamics in an inverse problem framework.

    PubMed

    Gómez, Pablo; Schützenberger, Anne; Kniesburges, Stefan; Bohr, Christopher; Döllinger, Michael

    2018-06-01

    This study presents a framework for a direct comparison of experimental vocal fold dynamics data to a numerical two-mass-model (2MM) by solving the corresponding inverse problem of which parameters lead to similar model behavior. The introduced 2MM features improvements such as a variable stiffness and a modified collision force. A set of physiologically sensible degrees of freedom is presented, and three optimization algorithms are compared on synthetic vocal fold trajectories. Finally, a total of 288 high-speed video recordings of six excised porcine larynges were optimized to validate the proposed framework. Particular focus lay on the subglottal pressure, as the experimental subglottal pressure is directly comparable to the model subglottal pressure. Fundamental frequency, amplitude and objective function values were also investigated. The employed 2MM is able to replicate the behavior of the porcine vocal folds very well. The model trajectories' fundamental frequency matches the one of the experimental trajectories in [Formula: see text] of the recordings. The relative error of the model trajectory amplitudes is on average [Formula: see text]. The experiments feature a mean subglottal pressure of 10.16 (SD [Formula: see text]) [Formula: see text]; in the model, it was on average 7.61 (SD [Formula: see text]) [Formula: see text]. A tendency of the model to underestimate the subglottal pressure is found, but the model is capable of inferring trends in the subglottal pressure. The average absolute error between the subglottal pressure in the model and the experiment is 2.90 (SD [Formula: see text]) [Formula: see text] or [Formula: see text]. A detailed analysis of the factors affecting the accuracy in matching the subglottal pressure is presented.

  14. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  15. Constraints on cosmological parameters in power-law cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rani, Sarita; Singh, J.K.; Altaibayeva, A.

    In this paper, we examine observational constraints on power-law cosmology, which depends essentially on two parameters: H0 (Hubble constant) and q (deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of Union2.1 compilation data, and compare the results with the results of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it fits the Union2.1 compilation data well but not the H(z) data. However, the constraints obtained on the averaged H0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
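    In power-law cosmology, a(t) ∝ t^α with α = 1/(1+q), the expansion rate takes the closed form H(z) = H0 (1+z)^(1+q), so fitting H(z) data reduces to a two-parameter problem; the grid search below uses made-up data points purely to illustrate the procedure, not the 28-point compilation used in the paper.

```python
import numpy as np

def hubble_powerlaw(z, H0, q):
    """Power-law cosmology a(t) ~ t^alpha, alpha = 1/(1+q):
    H(z) = H0 * (1+z)^(1+q)."""
    return H0 * (1.0 + z) ** (1.0 + q)

def chi2(H0, q, z, Hobs, sigma):
    return np.sum(((Hobs - hubble_powerlaw(z, H0, q)) / sigma) ** 2)

# made-up H(z) points (km s-1 Mpc-1), for illustration only
z = np.array([0.1, 0.4, 0.9, 1.3, 1.75])
Hobs = np.array([72.0, 87.0, 115.0, 140.0, 165.0])
sigma = np.array([8.0, 10.0, 12.0, 14.0, 15.0])

# coarse grid search over the two parameters of the model
H0s = np.linspace(60.0, 80.0, 81)
qs = np.linspace(-1.0, 1.0, 201)
grid = np.array([[chi2(h, q, z, Hobs, sigma) for q in qs] for h in H0s])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print("best fit: H0 =", H0s[i], ", q =", qs[j])
```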

  16. A Preliminary Bayesian Analysis of Incomplete Longitudinal Data from a Small Sample: Methodological Advances in an International Comparative Study of Educational Inequality

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; Maier, Kimberly S.

    2009-01-01

    The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…

  17. Global sensitivity analysis for identifying important parameters of nitrogen nitrification and denitrification under model uncertainty and scenario uncertainty

    NASA Astrophysics Data System (ADS)

    Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong

    2018-06-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, how the importance of nitrification activity changes with temperature relies strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen models.

  18. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    PubMed Central

    Zhong, Hualiang; Jin, Jian-yue; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J.

    2011-01-01

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,…,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,…,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung’s elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy. PMID:21520868

  19. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Jin Jianyue; Ajlouni, Munther

    2011-03-15

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,...,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,...,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung's elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy.

  20. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive-definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
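
    As a rough illustration of the process described, the sketch below synthesizes a filtered Poisson (shot-noise) signal with one-sided exponential pulses and exponentially distributed amplitudes, then computes the empirical characteristic function on which such parameter estimation is based. All settings (record length, pulse duration, waiting time) are illustrative assumptions, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed settings: record length T, sampling step dt,
        # pulse duration tau_d and mean pulse waiting time tau_w
        T, dt = 2000.0, 0.1
        tau_d, tau_w = 1.0, 5.0
        t = np.arange(0.0, T, dt)

        # Poisson arrivals with exponentially distributed amplitudes
        n_pulses = rng.poisson(T / tau_w)
        arrivals = rng.uniform(0.0, T, n_pulses)
        amps = rng.exponential(1.0, n_pulses)

        # Superpose one-sided exponential pulses
        signal = np.zeros_like(t)
        for tk, ak in zip(arrivals, amps):
            m = t >= tk
            signal[m] += ak * np.exp(-(t[m] - tk) / tau_d)

        # Empirical characteristic function, the object used for fitting
        u = np.linspace(-5.0, 5.0, 201)
        ecf = np.array([np.exp(1j * ui * signal).mean() for ui in u])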

  1. Water quality modeling in the dead end sections of drinking water (Supplement)

    EPA Pesticide Factsheets

    Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variation

  2. Water Quality Modeling in the Dead End Sections of Drinking ...

    EPA Pesticide Factsheets

    Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of a distribution network. This study proposes a new approach for simulating disinfectant residuals in dead end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogenous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations
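
    The stochastic demand generator described in both records rests on sampling a non-homogeneous Poisson process. One common way to do this is Lewis-Shedler thinning, sketched below with a purely hypothetical diurnal intensity and pulse duration/flow distributions; none of these functional forms or numbers come from the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def nhpp_thinning(rate_fn, t_end, rate_max):
            """Lewis-Shedler thinning: arrival times of a non-homogeneous
            Poisson process with intensity rate_fn(t) <= rate_max."""
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate_max)       # candidate arrival
                if t > t_end:
                    return np.array(times)
                if rng.uniform() < rate_fn(t) / rate_max:  # accept w.p. rate/rate_max
                    times.append(t)

        # Hypothetical diurnal intensity (pulses/hour) with morning/evening peaks
        def demand_rate(t_h):
            return (2.0 + 3.0 * np.exp(-0.5 * ((t_h % 24) - 7.0) ** 2)
                        + 2.5 * np.exp(-0.25 * ((t_h % 24) - 19.0) ** 2))

        starts = nhpp_thinning(demand_rate, t_end=24.0, rate_max=6.0)
        durations = rng.exponential(1.0 / 60.0, starts.size)  # hours (~1 min pulses)
        flows = rng.lognormal(-1.0, 0.5, starts.size)         # L/min, illustrative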

  3. Oxygen consumption rate of cells in 3D culture: the use of experiment and simulation to measure kinetic parameters and optimise culture conditions.

    PubMed

    Streeter, Ian; Cheema, Umber

    2011-10-07

    Understanding the basal O2 and nutrient requirements of cells is paramount when culturing cells in 3D tissue models. Any scaffold design will need to take such parameters into consideration, especially as the addition of cells introduces gradients of consumption of such molecules from the surface to the core of scaffolds. We have cultured two cell types in 3D native collagen type I scaffolds, and measured the O2 tension at specific locations within the scaffold. By changing the density of cells, we have established O2 consumption gradients within these scaffolds and using mathematical modeling have derived rates of consumption for O2. For human dermal fibroblasts the average rate constant was 1.19 × 10⁻¹⁷ mol cell⁻¹ s⁻¹, and for human bone marrow derived stromal cells the average rate constant was 7.91 × 10⁻¹⁸ mol cell⁻¹ s⁻¹. These values are lower than previously published rates for similar cells cultured in 2D, but the values established in this current study are more representative of rates of consumption measured in vivo. These values will dictate 3D culture parameters, including maximum cell-seeding density and maximum size of the constructs, for long-term viability of tissue models.
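
    To make the reported rate constants concrete, here is a back-of-envelope estimate of how long seeded cells would take to exhaust dissolved O2 in a sealed construct, ignoring diffusive resupply. Only the rate constant comes from the abstract; the seeding density and dissolved O2 concentration are assumed values.

        # Back-of-envelope O2 depletion time in a sealed 3D construct
        k = 1.19e-17          # mol O2 per cell per second (fibroblasts, from the study)
        cell_density = 5e5    # cells per mL, an illustrative seeding density
        c_o2 = 200e-9         # mol per mL, roughly 200 uM dissolved O2 (assumed)

        consumption = k * cell_density        # mol O2 per mL per second
        t_deplete = c_o2 / consumption        # seconds, ignoring diffusion resupply
        print(f"{t_deplete / 3600:.1f} hours to exhaust dissolved O2")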

  4. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
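
    The GLUE procedure mentioned above is conceptually simple: sample parameter sets, score each against observations with an informal likelihood, discard "non-behavioral" sets below a threshold, and weight the rest to form predictive distributions. A self-contained toy sketch follows; the one-parameter model toy_flux_model and all numbers are stand-ins, not the CSLM.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy "model": flux as a cheap function of Kd; stands in for a CSLM run
        def toy_flux_model(kd, forcing):
            return forcing * np.exp(-kd)    # hypothetical response to extinction

        forcing = rng.uniform(100, 300, 365)                         # daily forcing
        obs = toy_flux_model(0.5, forcing) + rng.normal(0, 5, 365)   # synthetic obs

        # GLUE: sample parameters, score with an informal likelihood (NSE),
        # keep "behavioral" sets above a threshold, weight predictions
        kd_samples = rng.uniform(0.1, 2.0, 5000)
        sims = np.array([toy_flux_model(kd, forcing) for kd in kd_samples])
        nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
        behavioral = nse > 0.5
        weights = nse[behavioral] / nse[behavioral].sum()

        # Weighted predictive mean from the behavioral ensemble
        pred_mean = weights @ sims[behavioral]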

  5. Volatility measurement with directional change in Chinese stock market: Statistical property and investment strategy

    NASA Astrophysics Data System (ADS)

    Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei

    2017-04-01

    The stock price fluctuation is studied in this paper from an intrinsic time perspective. Events, namely directional changes (DC) and overshoots, are used as the time scale of the price series. Under this directional change law, the corresponding statistical properties and parameter estimation are tested on the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit owing to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
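
    A directional change event is commonly defined by a fixed relative threshold: a downturn DC is confirmed once the price falls by at least theta from the running maximum, and symmetrically for upturns. A minimal detector in that spirit is sketched below; the threshold value and event bookkeeping are illustrative, and the paper's exact definitions may differ.

        import numpy as np

        def directional_changes(prices, theta=0.018):
            """Segment a price series into directional-change (DC) events
            using a fixed relative threshold theta (e.g., 1.8%)."""
            events = []
            ext = prices[0]        # running extreme: max in uptrend, min in downtrend
            uptrend = True
            for i, p in enumerate(prices):
                if uptrend:
                    if p > ext:
                        ext = p
                    elif (ext - p) / ext >= theta:   # confirmed downturn DC
                        events.append(("down", i))
                        uptrend, ext = False, p
                else:
                    if p < ext:
                        ext = p
                    elif (p - ext) / ext >= theta:   # confirmed upturn DC
                        events.append(("up", i))
                        uptrend, ext = True, p
            return events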

  6. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.

  7. Modal description—A better way of characterizing human vibration behavior

    NASA Astrophysics Data System (ADS)

    Rützel, Sebastian; Hinz, Barbara; Wölfel, Horst Peter

    2006-12-01

    Biodynamic responses to whole body vibrations are usually characterized in terms of transfer functions, such as impedance or apparent mass. Data measurements from subjects are averaged and analyzed with respect to certain attributes (anthropometrics, posture, excitation intensity, etc.). Averaging involves the risk of identifying unnatural vibration characteristics. The use of a modal description as an alternative method is presented and its contribution to biodynamic modelling is discussed. Modal description is not limited to just one biodynamic function: The method holds for all transfer functions. This is shown in terms of the apparent mass and the seat-to-head transfer function. The advantages of modal description are illustrated using apparent mass data of six male individuals of the same mass percentile. From experimental data, modal parameters such as natural frequencies, damping ratios and modal masses are identified which can easily be used to set up a mathematical model. Following the phenomenological approach, this model will provide the global vibration behavior relating to the input data. The modal description could be used for the development of hardware vibration dummies. With respect to software models such as finite element models, the validation process for these models can be supported by the modal approach. Modal parameters of computational models and of the experimental data can establish a basis for comparison.

  8. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  9. Application of Self-Similarity Constrained Reynolds-Averaged Turbulence Models to Rayleigh-Taylor and Richtmyer-Meshkov Unstable Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Hartland, Tucker A.; Schilling, Oleg

    2016-11-01

    Analytical self-similar solutions corresponding to Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instability are combined with observed values of the growth parameters in these instabilities to derive coefficient sets for K-ε and K-L-a Reynolds-averaged turbulence models. It is shown that full numerical solutions of the model equations give mixing layer widths, fields, and budgets in good agreement with the corresponding self-similar quantities for small Atwood number. Both models are then applied to Rayleigh-Taylor instability with increasing density contrasts to estimate the Atwood number above which the self-similar solutions become invalid. The models are also applied to a reshocked Richtmyer-Meshkov instability, and the predictions are compared with data. The expressions for the growth parameters obtained from the similarity analysis are used to develop estimates for the sensitivity of their values to changes in important model coefficients. Numerical simulations using these modified coefficient values are then performed to provide bounds on the model predictions associated with uncertainties in these coefficient values. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was supported by the 2016 LLNL High-Energy-Density Physics Summer Student Program.

  10. Analysis of Geothermal Reservoir and Well Operational Conditions using Monthly Production Reports from Nevada and California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckers, Koenraad J; Young, Katherine R; Johnston, Henry

    When conducting techno-economic analysis of geothermal systems, assumptions are typically necessary for reservoir and wellbore parameters such as producer/injector well ratio, production temperature drawdown, and production/injection temperature, pressure and flow rate. To decrease uncertainty of several of these parameters, we analyzed field data reported by operators in monthly production reports. This paper presents results of a statistical analysis conducted on monthly production reports at 19 power plants in California and Nevada covering 196 production wells and 175 injection wells. The average production temperature was 304 degrees F (151 degrees C) for binary plants and 310 degrees F (154 degrees C) for flash plants. The average injection temperature was 169 degrees F (76 degrees C) for binary plants and 173 degrees F (78 degrees C) for flash plants. The average production temperature drawdown was 0.5% per year for binary plants and 0.8% per year for flash plants. The average production well flow rate was 112 L/s for binary plant wells and 62 L/s for flash plant wells. For all 19 plants combined, the median injectivity index value was 3.8 L/s/bar, and the average producer/injector well ratio was 1.6. As an additional example of analysis using data from monthly production reports, a coupled reservoir-wellbore model was developed to derive productivity curves at various pump horsepower settings. The workflow and model were applied to two example production wells.

  11. Langevin equation with fluctuating diffusivity: A two-state model

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji

    2016-07-01

    Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
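
    A minimal simulation in the spirit of this two-state model: the diffusivity switches between fast and slow values with power-law sojourn times, the Langevin increments are scaled accordingly, and the time-averaged MSD is computed from a single trajectory. The sojourn-time form and all parameter values below are assumptions for illustration, not the paper's exact specification.

        import numpy as np

        rng = np.random.default_rng(3)

        def power_law_sojourn(alpha, size):
            """Pareto-type sojourn times with tail exponent alpha (assumed form)."""
            return (1.0 - rng.uniform(size=size)) ** (-1.0 / alpha)

        def two_state_trajectory(n_steps, dt, d_fast, d_slow, alpha):
            """Langevin dynamics with diffusivity switching between two states
            whose sojourn times are power-law distributed."""
            D = np.empty(n_steps)
            state_fast, i = True, 0
            while i < n_steps:
                dur = int(power_law_sojourn(alpha, 1)[0] / dt) + 1
                D[i:i + dur] = d_fast if state_fast else d_slow
                state_fast = not state_fast
                i += dur
            dx = np.sqrt(2 * D * dt) * rng.normal(size=n_steps)
            return np.cumsum(dx)

        # Time-averaged MSD of a single trajectory at a few lag times
        x = two_state_trajectory(100_000, 0.01, d_fast=1.0, d_slow=0.01, alpha=0.7)
        lags = np.array([10, 100, 1000])
        ta_msd = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]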

  12. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
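
    The practice described, reporting an uncertainty summary alongside the point estimate, looks like this for a sample mean with a t-based confidence interval; the data here are toy values.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        sample = rng.normal(loc=10.0, scale=2.0, size=30)   # toy data

        mean = sample.mean()
        sem = stats.sem(sample)                             # standard error of the mean
        ci_lo, ci_hi = stats.t.interval(0.95, df=sample.size - 1,
                                        loc=mean, scale=sem)
        print(f"mean = {mean:.2f}, 95% CI = ({ci_lo:.2f}, {ci_hi:.2f})")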

  13. Forcing a three-dimensional, hydrostatic, primitive-equation model for application in the surf zone: 2. Application to DUCK94

    NASA Astrophysics Data System (ADS)

    Newberger, P. A.; Allen, J. S.

    2007-08-01

    A three-dimensional primitive-equation model for application to the nearshore surf zone has been developed. This model, an extension of the Princeton Ocean Model (POM), predicts the wave-averaged circulation forced by breaking waves. All of the features of the original POM are retained in the extended model so that applications can be made to regions where breaking waves, stratification, rotation, and wind stress make significant contributions to the flow behavior. In this study we examine the effects of breaking waves and wind stress. The nearshore POM circulation model is embedded within the NearCom community model and is coupled with a wave model. This combined modeling system is applied to the nearshore surf zone off Duck, North Carolina, during the DUCK94 field experiment of October 1994. Model results are compared to observations from this experiment, and the effects of parameter choices are examined. A process study examining the effects of tidal depth variation on depth-dependent wave-averaged currents is carried out. With identical offshore wave conditions and model parameters, the strength and spatial structure of the undertow and of the alongshore current vary systematically with water depth. Some three-dimensional solutions show the development of shear instabilities of the alongshore current. Inclusion of wave-current interactions makes an appreciable difference in the characteristics of the instability.

  14. Is there a `universal' dynamic zero-parameter hydrological model? Evaluation of a dynamic Budyko model in US and India

    NASA Astrophysics Data System (ADS)

    Patnaik, S.; Biswal, B.; Sharma, V. C.

    2017-12-01

    River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the 'universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for model calibration. Flow prediction in ungauged basins thus requires a dynamic zero-parameter hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters; however, a dynamic zero-parameter model obtained through regionalization is not 'universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs with the help of a decay function and the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments. The median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements need to be incorporated before the model can be used for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results seem to suggest that the dynamic zero-parameter Budyko model is 'universal': natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs. We thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
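
    For reference, the original zero-parameter Budyko curve that the dynamic model builds on can be evaluated directly; the catchment forcing numbers below are illustrative.

        import numpy as np

        def budyko_evaporative_fraction(pet, p):
            """Original (zero-parameter) Budyko curve: long-term E/P as a
            function of the dryness index PET/P."""
            phi = pet / p
            return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

        # Long-term water balance for an assumed catchment (mm/yr)
        p, pet = 1200.0, 900.0
        e = budyko_evaporative_fraction(pet, p) * p
        q = p - e    # mean annual runoff implied by the curve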

  15. Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function

    NASA Astrophysics Data System (ADS)

    Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.

    2015-12-01

    In this study, we have evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution (a Gamma SDF with a fixed shape parameter, FSP); Gamma SDFs with the shape parameter obtained by three diagnostic methods, based on Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a Gamma SDF obtained by solving for the shape parameter (SSP); and the Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying an FSP was 10⁻² when fitting the 0-order moment of the observed rain droplet distribution, rising to 10⁻¹ for the 1-4 order moments and 10⁰ for the 5-6 order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods improved the fitting accuracy for the 0-6 order moments, especially the ensemble method coupling the SSP and DSPS08 methods, which yielded average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error in fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group: when the ensemble method coupling SSP and DSPS08 was used, a better fit to the 1-3-5 moment group of the SDF was possible compared to fitting the 0-3-6 moment group.
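
    One standard way to fit a Gamma SDF, N(D) = N0 D^mu exp(-lambda D), to an observed spectrum is the method of moments, using the identity M_k^2 / (M_{k-1} M_{k+1}) = (mu+k)/(mu+k+1) satisfied by its moments. A sketch with an invented droplet spectrum follows; the paper's diagnostic methods for the shape parameter are more elaborate, and real spectra may push the moment ratio outside the valid range.

        import numpy as np
        from scipy.special import gamma as G

        def fit_gamma_sdf(diams, counts):
            """Method-of-moments fit of a Gamma size distribution
            N(D) = N0 * D**mu * exp(-lam*D) using moments 2, 3 and 4."""
            M = {k: np.sum(counts * diams ** k) for k in (2, 3, 4)}
            r = M[3] ** 2 / (M[2] * M[4])    # equals (mu+3)/(mu+4) for a Gamma SDF
            mu = (4 * r - 3) / (1 - r)
            lam = (mu + 4) * M[3] / M[4]
            n0 = M[3] * lam ** (mu + 4) / G(mu + 4)
            return n0, mu, lam

        # Example with an assumed measured spectrum (diameters in mm)
        diams = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.6])
        counts = np.array([120, 95, 60, 30, 12, 4])    # per m^3 per size bin
        n0, mu, lam = fit_gamma_sdf(diams, counts)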

  16. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  17. Protein solubility modeling

    NASA Technical Reports Server (NTRS)

    Agena, S. M.; Pusey, M. L.; Bogle, I. D.

    1999-01-01

    A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

  18. Energy relaxation of intense laser pulse-produced plasmas

    NASA Astrophysics Data System (ADS)

    Shihab, M.; Abou-Koura, G. H.; El-Siragy, N. M.

    2016-05-01

    We describe a collisional radiative model (CRE) of homogeneously expanded nickel plasmas in vacuum. The CRE model is coupled with two separate electron and ion temperature magneto-hydrodynamic equations. As output, the model provides the temporal variation of the electron temperature, ion temperature, and average charge state. We demonstrate the effect of three-body recombination (∝ N_e T_e^{-9/2}) on plasma parameters, as it changes the time dependence of the electron temperature from t^{-2} to t^{-1} and exhibits a pronounced effect leading to a freezing feature in the average charge state. In addition, the effect of three-body recombination on the warm-up of ions and the delay of equilibration is addressed.

  19. Constraining brane tension using rotation curves of galaxies

    NASA Astrophysics Data System (ADS)

    García-Aspeitia, Miguel A.; Rodríguez-Meza, Mario A.

    2018-04-01

    We present in this work a study of brane theory phenomenology focusing on the brane tension parameter, which is the main observable of the theory. We show the modifications stemming from the presence of branes in the rotation curves of spiral galaxies for three well-known dark matter density profiles: pseudo-isothermal, Navarro-Frenk-White and Burkert. We estimate the brane tension parameter using a sample of high-resolution observed rotation curves of low surface brightness spiral galaxies and a synthetic rotation curve for the three density profiles. The fits obtained using the brane theory model of the rotation curves are also compared with standard Newtonian models. We found that the Navarro-Frenk-White model prefers lower values of the brane tension parameter, on average λ ∼ 0.73 × 10⁻³ eV⁴, therefore showing clear brane effects. The Burkert case prefers higher values of the tension parameter, on average λ ∼ 0.93-46 eV⁴, i.e., negligible brane effects, whereas the pseudo-isothermal profile is an intermediate case. Due to the low densities found in the galactic medium, it is almost impossible to find evidence of the presence of extra dimensions. In this context, our results place weaker bounds on the brane tension than bounds found previously, such as the lower value found for dwarf stars described by a polytropic equation of state, λ ≈ 10⁴ MeV⁴.
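
    For context, the Newtonian baseline against which the brane-modified fits are compared uses standard circular velocities; for the NFW profile this can be written down directly. A sketch with illustrative halo parameters follows (the brane corrections themselves are not reproduced here, and rho0 and rs are invented values).

        import numpy as np

        G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

        def v_circ_nfw(r_kpc, rho0, rs):
            """Newtonian circular velocity for an NFW halo,
            rho(r) = rho0 / [(r/rs) * (1 + r/rs)^2]."""
            x = r_kpc / rs
            m_enc = 4 * np.pi * rho0 * rs ** 3 * (np.log(1 + x) - x / (1 + x))
            return np.sqrt(G * m_enc / r_kpc)

        r = np.linspace(0.5, 30, 60)                  # kpc
        v = v_circ_nfw(r, rho0=1.0e7, rs=8.0)         # illustrative halo parameters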

  20. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  1. Stiffness and relaxation components of the exponential and logistic time constants may be used to derive a load-independent index of isovolumic pressure decay.

    PubMed

    Shmuylovich, Leonid; Kovács, Sándor J

    2008-12-01

    In current practice, empirical parameters such as the monoexponential time constant τ or the logistic model time constant τ_L are used to quantitate isovolumic relaxation. Previous work indicates that τ and τ_L are load dependent. A load-independent index of isovolumic pressure decay (LIIIVPD) does not exist. In this study, we derive and validate a LIIIVPD. Recently, we have derived and validated a kinematic model of isovolumic pressure decay (IVPD), where IVPD is accurately predicted by the solution to an equation of motion parameterized by stiffness (E_k), relaxation (τ_c), and pressure asymptote (P_∞) parameters. In this study, we use this kinematic model to predict, derive, and validate the load-independent index M_LIIIVPD. We predict that the plot of lumped recoil effects [E_k·(P*_max − P_∞)] versus resistance effects [τ_c·(dP/dt)_min], defined by a set of load-varying IVPD contours, where P*_max is the maximum pressure and (dP/dt)_min is the minimum first derivative of pressure, yields a linear relation with a constant (i.e., load-independent) slope M_LIIIVPD. To validate the load independence, we analyzed an average of 107 IVPD contours in 25 subjects (2,669 beats total) undergoing diagnostic catheterization. For the group as a whole, we found the E_k·(P*_max − P_∞) versus τ_c·(dP/dt)_min relation to be highly linear, with average slope M_LIIIVPD = 1.107 ± 0.044 and average r² = 0.993 ± 0.006. For all subjects, M_LIIIVPD was found to be linearly correlated with the subject-averaged τ (r² = 0.65), τ_L (r² = 0.50), and (dP/dt)_min (r² = 0.63), as well as with ejection fraction (r² = 0.52). We conclude that M_LIIIVPD is a LIIIVPD because it is load independent and correlates with conventional IVPD parameters. Further validation of M_LIIIVPD in selected pathophysiological settings is warranted.

  2. Prediction of surface roughness in turning of Ti-6Al-4V using cutting parameters, forces and tool vibration

    NASA Astrophysics Data System (ADS)

    Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja

    2018-04-01

    The present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for the design of experiments with cutting speed, feed rate and depth of cut as design variables. A prediction model for surface roughness is developed using response surface methodology (RSM) with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find the significant terms in the model. Insignificant terms are removed after statistical testing using a backward elimination approach. The effect of each control variable on surface roughness is also studied. A prediction correlation coefficient (R²pred) of 99.4% shows that the model correctly explains the experimental results and behaves well even when factors are adjusted, added or eliminated. The model is validated with five fresh experiments together with measured force and acceleration values. The average absolute error between the RSM model and the experimentally measured surface roughness is found to be 10.2%. Additionally, an artificial neural network (ANN) model is also developed for prediction of surface roughness. The prediction results of the modified regression model are compared with the ANN. It is found that the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. From the results obtained, it is found that including cutting force and vibration in the prediction of surface roughness gives better predictions than considering cutting parameters alone. Also, the ANN gives better predictions than the RSM models.
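
    The RSM step amounts to least-squares fitting of a quadratic polynomial in the coded factors, followed by term elimination. A compact sketch with two synthetic factors is shown below; the study used five control variables and formal ANOVA-based backward elimination, neither of which is reproduced here.

        import numpy as np

        rng = np.random.default_rng(7)

        # Two coded factors (e.g., cutting speed and feed rate) and a
        # synthetic roughness response; all numbers are invented
        X = rng.uniform(-1.0, 1.0, size=(30, 2))
        ra = (1.2 - 0.1 * X[:, 0] + 0.4 * X[:, 1]
              + 0.15 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 30))

        # Full quadratic design matrix: intercept, linear, interaction, squares
        A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
        coef, *_ = np.linalg.lstsq(A, ra, rcond=None)

        # R^2 of the fitted response surface; insignificant terms would be
        # dropped by backward elimination before final use
        pred = A @ coef
        r2 = 1.0 - ((ra - pred) ** 2).sum() / ((ra - ra.mean()) ** 2).sum()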

  3. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field is spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior that involve occasional anomalously correlated rain events, present a challenge for the model.

  4. Genetic Analysis of Milk Yield in First-Lactation Holstein Friesian in Ethiopia: A Lactation Average vs Random Regression Test-Day Model Analysis

    PubMed Central

    Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.

    2015-01-01

    The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Co-variance components were estimated using the average information restricted maximum likelihood method under single trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM whilst estimates from the RRM model ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had higher standard deviation compared to LAM indicating that the TD model makes efficient utilization of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different group of sires and cows and marked re-rankings were observed in top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217

  5. Evaluation of a pulmonary strain model by registration of dynamic CT scans

    NASA Astrophysics Data System (ADS)

    Pomeroy, Marc; Liang, Zhengrong; Brehm, Anthony

    2017-03-01

    Idiopathic pulmonary fibrosis (IPF) is a chronic fibrotic lung disease that develops in adults without any known cause. It is an interstitial lung disease in which the lung tissue becomes scarred and stiffens, ultimately leading to respiratory failure. This disease currently has no cure and limited treatment options, leading to an average survival time of 3-5 years after diagnosis. In this paper we employ a mathematical model simulating the lung parenchyma as hexagons with elastic forces applied to connecting vertices and opposing vertices. Using an image registration algorithm, we obtain trajectories from 4D-CT scans of a healthy patient and of one suffering from IPF. Converting the image trajectories into a hexagonal lattice, we fit the model parameters to match the respiratory motion seen for both patients across multiple image slices. We found the model could reasonably describe the healthy lung slices, with a minimum average error between corresponding vertices of 1.66 mm. For the fibrotic lung slices the model was less accurate, maintaining a higher average error across all slices. Using the optimized parameters, we apply the forces predicted from the model at the image trajectory positions for each phase. Although the error is large, the spring constant values determined for the fibrotic patient were not as high as we expected, and more often than not were lower than those of corresponding healthy lung slices. However, the net force distribution for some of those slices was still found to be greater than for the healthy lung counterparts. Other modifications to the model, such as additional directional components and changes to which vertices receive forces, remain to be explored; with the limited sample size available, a clear distinction between the healthy and fibrotic lung cannot yet be made by this model.

  6. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  7. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  8. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  9. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  10. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
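
    In the spirit of the calibration described in these records, here is a minimal random-walk Metropolis sketch that conditions a single parameter on a climatological-average latent heat flux through a cheap surrogate. The surrogate form surrogate_lh, the observation, and the error are all invented stand-ins for CLM and the flux-tower data.

        import numpy as np

        rng = np.random.default_rng(5)

        # Cheap surrogate for the land model: mean latent heat flux as a
        # function of one hydrological parameter (purely illustrative)
        def surrogate_lh(theta):
            return 80.0 + 25.0 * np.tanh(theta)

        obs_lh, sigma = 95.0, 4.0   # climatological-average observation and error

        def log_post(theta):
            if not (-3.0 < theta < 3.0):      # uniform prior bounds
                return -np.inf
            return -0.5 * ((surrogate_lh(theta) - obs_lh) / sigma) ** 2

        # Random-walk Metropolis sampling of the posterior
        chain, theta = [], 0.0
        lp = log_post(theta)
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.3)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)

        posterior = np.array(chain[5000:])    # discard burn-in
        print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))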

  11. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so models can be strongly or weakly time-dependent.
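
    The inverse-CDF device mentioned above can be scripted directly: under a memoryless (Poisson) recurrence model the inter-event times are exponential, so transforming them by the fitted exponential CDF should yield uniform values, which a Kolmogorov-Smirnov test can check. The event dates below are illustrative, loosely patterned on published paleoseismic chronologies, not an actual record.

        import numpy as np
        from scipy import stats

        # Hypothetical paleoseismic event dates (years)
        event_years = np.array([671, 734, 781, 850, 1048, 1096,
                                1247, 1341, 1480, 1812, 1857])
        gaps = np.diff(event_years)

        # Inverse-CDF transform: exponential gaps become uniform on [0, 1];
        # p-value is approximate since the mean is estimated from the same data
        u = 1.0 - np.exp(-gaps / gaps.mean())
        ks_stat, p_value = stats.kstest(u, "uniform")
        print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")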

  12. The contact heat transfer between the heating plate and granular materials in rotary heat exchanger under overloaded condition

    NASA Astrophysics Data System (ADS)

    Duan, Luanfang; Qi, Chonggang; Ling, Xiang; Peng, Hao

    2018-03-01

    In the present work, the contact heat transfer between granular materials and the heating plates inside a plate rotary heat exchanger (PRHE) was investigated. The heat transfer coefficient is dominated by the contact heat transfer coefficient at the hot wall surface of the heating plates and the heat penetration inside the solid bed. A pilot-scale PRHE with a diameter of Do = 273 mm and a length of L = 1000 mm was established. Quartz sand with dp = 2 mm was employed as the experimental material. The operational parameters were in the range of ω = 1-8 rpm and F = 15, 20, 25, 30%, and the effect of these parameters on the time-averaged contact heat transfer coefficient was analyzed. The time-averaged contact heat transfer coefficient increases with increasing rotary speed but decreases with increasing filling degree. The measured time-averaged heat transfer coefficients were compared with theoretical calculations from Schlünder's model; good agreement between the measurements and the model was achieved, especially at lower rotary speed and filling degree. The maximum deviation between the calculated and experimental data is approximately 10%.

  13. a Bayesian Synthesis of Predictions from Different Models for Setting Water Quality Criteria

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G. B.; Ecological Modelling Laboratory

    2011-12-01

    Skeptical views of the scientific value of modelling argue that there is no true model of an ecological system, but rather several adequate descriptions of different conceptual basis and structure. In this regard, rather than picking the single "best-fit" model to predict future system responses, we can use Bayesian model averaging to synthesize the forecasts from different models. Hence, by acknowledging that models from different areas of the complexity spectrum have different strengths and weaknesses, the Bayesian model averaging is an appealing approach to improve the predictive capacity and to overcome the ambiguity surrounding the model selection or the risk of basing ecological forecasts on a single model. Our study addresses this question using a complex ecological model, developed by Ramin et al. (2011; Environ Modell Softw 26, 337-353) to guide the water quality criteria setting process in the Hamilton Harbour (Ontario, Canada), along with a simpler plankton model that considers the interplay among phosphate, detritus, and generic phytoplankton and zooplankton state variables. This simple approach is more easily subjected to detailed sensitivity analysis and also has the advantage of fewer unconstrained parameters. Using Markov Chain Monte Carlo simulations, we calculate the relative mean standard error to assess the posterior support of the two models from the existing data. Predictions from the two models are then combined using the respective standard error estimates as weights in a weighted model average. The model averaging approach is used to examine the robustness of predictive statements made from our earlier work regarding the response of Hamilton Harbour to the different nutrient loading reduction strategies. The two eutrophication models are then used in conjunction with the SPAtially Referenced Regressions On Watershed attributes (SPARROW) watershed model. The Bayesian nature of our work is used: (i) to alleviate problems of spatiotemporal resolution mismatch between watershed and receiving waterbody models; and (ii) to overcome the conceptual or scale misalignment between processes of interest and supporting information. The proposed Bayesian approach provides an effective means of empirically estimating the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed, while explicitly accounting for the uncertainty associated with the existing knowledge from the system along with the different types of spatial correlation typically underlying the parameter estimation of watershed models. Our modelling exercise offers the first estimates of the export coefficients and the delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export "hot spots" in the studied watershed. Finally, we conduct modeling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty, if the uncertainty associated with the contemporary nutrient loading estimates is reduced. The lessons learned from this study will contribute towards the development of integrated modelling frameworks.
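
    Stripped to its essentials, the weighted model average described above combines the two models' predictions with weights derived from their posterior support. The sketch below uses inverse mean-squared error as a simple stand-in for those weights, with invented numbers.

        import numpy as np

        # Predictions of a water-quality variable from two models of different
        # complexity, plus observations (all numbers illustrative)
        obs = np.array([12.1, 9.8, 14.3, 11.0, 10.5])
        pred_complex = np.array([11.5, 10.2, 13.6, 11.8, 10.1])
        pred_simple = np.array([12.9, 9.1, 15.2, 10.2, 11.4])

        # Weight each model by inverse mean-squared error, a simple stand-in
        # for the posterior support used in the study
        mse = [np.mean((p - obs) ** 2) for p in (pred_complex, pred_simple)]
        w = np.array([1.0 / m for m in mse])
        w /= w.sum()

        averaged = w[0] * pred_complex + w[1] * pred_simple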

  14. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    The relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction to the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found to be less important. The most computationally expensive off-diagonal quartic energy derivatives, involving four different coordinates, provided a negligible contribution. The vibrational corrections to NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants, the averaging significantly improved even the equilibrium values obtained at the density functional theory level. Both the first and the complete second shielding derivatives were found to be important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reproduced the experiment reasonably well, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives and for the larger, polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
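
    The "full second-order expansion" referred to above can be written, over normal modes q_i, as <P> ~ P_eq + sum_i (dP/dq_i)<q_i> + (1/2) sum_i (d2P/dq_i^2)<q_i^2>. A minimal numerical sketch follows; every derivative and moment value is a placeholder, not a result from the paper.

        import numpy as np

        # Second-order vibrational averaging of an NMR property P over three normal modes.
        P_eq = 30.0                              # equilibrium chemical shift (ppm), hypothetical
        dP  = np.array([0.02, -0.01, 0.005])     # first property derivatives dP/dq_i
        d2P = np.array([-0.4, 0.1, 0.05])        # diagonal second derivatives d2P/dq_i^2
        q1  = np.array([0.01, 0.0, -0.005])      # <q_i>, nonzero only through anharmonicity
        q2  = np.array([0.02, 0.015, 0.01])      # <q_i^2>, harmonic expectation values

        P_avg = P_eq + dP @ q1 + 0.5 * d2P @ q2
        print(f"vibrationally averaged property: {P_avg:.4f} ppm")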

  15. A microprocessor based high speed packet switch for satellite communications

    NASA Technical Reports Server (NTRS)

    Arozullah, M.; Crist, S. C.

    1980-01-01

    The architectures of a single-processor, a three-processor, and a multiple-processor system are described. The hardware circuits and software routines required for implementing the three- and multiple-processor designs are presented. A bit-slice microprocessor was designed and microprogrammed. Maximum throughput was calculated for all three designs. Queueing-theoretic models for these three designs were developed and utilized to obtain analytical expressions for the average waiting times, overall average response times, and average queue sizes. From these expressions, graphs were obtained showing the effect of a number of design parameters on system performance.
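
    For context on the queueing quantities mentioned above, the sketch below computes the average waiting time, response time, and queue length for a single-server queue, assuming M/M/1 behavior purely for illustration; the paper's actual queueing models and parameter values are not reproduced here.

        # M/M/1 sketch: exponential interarrival and service times, one server.
        def mm1_metrics(arrival_rate, service_rate):
            rho = arrival_rate / service_rate               # utilization, must be < 1
            wait = rho / (service_rate - arrival_rate)      # average waiting time in queue
            response = 1.0 / (service_rate - arrival_rate)  # average response (sojourn) time
            queue_len = rho**2 / (1.0 - rho)                # average queue length (excluding in service)
            return wait, response, queue_len

        # Illustrative packet rates (packets per second), not from the paper.
        print(mm1_metrics(arrival_rate=800.0, service_rate=1000.0))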

  16. Determination of the Global-Average Charge Moment of a Lightning Flash Using Schumann Resonances and the LIS/OTD Lightning Data

    NASA Astrophysics Data System (ADS)

    Boldi, Robert; Williams, Earle; Guha, Anirban

    2018-01-01

    In this paper, we use (1) the 20 year record of Schumann resonance (SR) signals measured at West Greenwich, Rhode Island, USA, (2) the 19 year Lightning Imaging Sensor (LIS)/Optical Transient Detector (OTD) lightning data, and (3) the normal mode equations for a uniform cavity model to quantify the relationship between the observed Schumann resonance modal intensity and the global-average vertical charge moment change M (C km) per lightning flash. This work, by integrating SR measurements with satellite-based optical measurements of the global flash rate, accomplishes this quantification for the first time. To do this, we first fit the intensity spectra of the observed SR signals to an eight-mode (symmetric) Lorentzian line shape model with three parameters per mode. Next, using the LIS/OTD lightning data and the normal mode equations for a uniform cavity model, we computed the expected climatological-daily-average intensity spectra. We then regressed the observed modal intensity values against the expected modal intensity values to find the best-fit value of the global-average vertical charge moment change of a lightning flash (M) to be 41 C km per flash, with a 99% confidence interval of ±3.9 C km per flash, independent of mode. Mode independence argues that the model adequately captured the modal intensity, the most important fit parameter considered here. We also tested this relationship for the presence of residual modal intensity at zero lightning flashes per second and found no evidence that the modal intensity differs significantly from zero at zero lightning flashes per second, setting an upper limit on the amount of nonlightning contributions to the observed modal intensity.
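
    For illustration, a single symmetric Lorentzian mode of the kind used in the eight-mode spectral fit can be fitted with a standard least-squares routine; the data and mode parameters below are synthetic, and this is not the paper's fitting code.

        import numpy as np
        from scipy.optimize import curve_fit

        # One symmetric Lorentzian mode: peak intensity I, center frequency f0, half-width g.
        def lorentzian(f, I, f0, g):
            return I * g**2 / ((f - f0)**2 + g**2)

        f = np.linspace(5.0, 11.0, 200)                   # Hz, around the first SR mode
        clean = lorentzian(f, 1.0, 7.8, 1.0)
        noisy = clean + 0.02 * np.random.default_rng(0).standard_normal(f.size)

        popt, _ = curve_fit(lorentzian, f, noisy, p0=[1.0, 8.0, 1.0])
        print("fitted (intensity, center, width):", popt)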

  17. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    NASA Astrophysics Data System (ADS)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is largely unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both the dynamics and the balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and to decrease both parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, than calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global search and Bayesian inference schemes.
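
    The NSE figure quoted above is the Nash-Sutcliffe efficiency; a minimal implementation follows, with placeholder arrays standing in for the observed and simulated daily series.

        import numpy as np

        def nse(observed, simulated):
            # Nash-Sutcliffe efficiency: 1 - sum of squared errors over spread of observations.
            observed, simulated = np.asarray(observed), np.asarray(simulated)
            return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

        q_obs = np.array([1.2, 1.5, 2.3, 1.9, 1.4])   # daily discharge, m3/s (placeholder)
        q_sim = np.array([1.1, 1.6, 2.1, 2.0, 1.3])
        print(f"NSE = {nse(q_obs, q_sim):.3f}")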

  18. Variance-based selection may explain general mating patterns in social insects.

    PubMed

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.

  19. History dependent quantum walk on the cycle with an unbalanced coin

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2015-06-01

    Recently, a new model of quantum walk, utilizing recycled coins, was introduced; however, little is yet known about its properties. In this paper, we study its behavior on the cycle graph. In particular, we consider its time-averaged distribution and how it is affected by the walk's "memory parameter", a real parameter between zero and eight that affects the walk's coin flip operator. Despite the infinite number of possible parameter values, our analysis provides evidence that only a few produce non-uniform behavior. Our analysis also shows that the initial state and the cycle size modulo four affect the behavior of this walk. We also prove an interesting relationship between the recycled coin model and a different memory-based quantum walk recently proposed.

  20. Susceptible-infected-susceptible epidemics on networks with general infection and cure times.

    PubMed

    Cator, E; van de Bovenkamp, R; Van Mieghem, P

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.

  1. Susceptible-infected-susceptible epidemics on networks with general infection and cure times

    NASA Astrophysics Data System (ADS)

    Cator, E.; van de Bovenkamp, R.; Van Mieghem, P.

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.
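
    For reference, the classical NIMFA equations that the GSIS mean-field equations are said to resemble can be sketched as follows for a small network; the graph, rates, and initial condition are illustrative, and the GSIS-specific machinery (general infection and recovery time distributions) is not reproduced here.

        import numpy as np
        from scipy.integrate import odeint

        # Classical NIMFA mean-field SIS: dv_i/dt = beta (1 - v_i) sum_j A_ij v_j - delta v_i,
        # with epidemic threshold tau_c = 1 / lambda_max(A) in the Markovian case.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
        beta, delta = 0.6, 1.0

        def nimfa(v, t):
            return beta * (1.0 - v) * (A @ v) - delta * v

        t = np.linspace(0.0, 20.0, 200)
        v = odeint(nimfa, 0.01 * np.ones(4), t)
        print("steady-state infection probabilities:", v[-1].round(3))
        print("Markovian threshold tau_c =", round(1.0 / np.linalg.eigvalsh(A).max(), 3))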

  2. Preliminary estimates of Gulf Stream characteristics from TOPEX data and a precise gravimetric geoid

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.; Smith, Dru A.

    1994-01-01

    TOPEX sea surface height data have been used, with a gravimetric geoid, to calculate sea surface topography across the Gulf Stream. This topography was initially computed for nine tracks on cycles 21 to 29. Due to inaccurate geoid undulations on one track, results for eight tracks are reported. The sea surface topography estimates were used to calculate parameters that describe Gulf Stream characteristics from two models of the Gulf Stream. One model was based on a Gaussian representation of the velocity, while the other was a hyperbolic representation of the velocity or of the sea surface topography. The parameters of the Gaussian velocity model fit were a width parameter, a maximum velocity value, and the location of the maximum velocity. The parameters of the hyperbolic sea surface topography model were the width, the height jump, the position, and the sea surface topography at the center of the stream. Both models were used for the eight tracks and nine cycles studied. Comparisons were made between the width parameters, the maximum velocities, and the height jumps. Some of the parameter estimates were found to be highly (0.9) correlated when the hyperbolic sea surface topography fit was carried out, but such correlations were reduced for either the Gaussian velocity fit or the hyperbolic velocity model fit. A comparison of the parameters derived from 1 year of TOPEX data showed good agreement with values derived by Kelly (1991) using 2.5 years of Geosat data near 38 deg N, 66 deg W. The accuracy of the geoid undulations used in the calculations was of the order of ±16 cm, with the accuracy of a geoid undulation difference equal to ±15 cm over a 100-km line in areas with good terrestrial data coverage. This paper demonstrates that our knowledge of geoid undulations and undulation differences, in a portion of the Gulf Stream region, is sufficiently accurate to determine characteristics of the jet when used with TOPEX altimeter data. The method used here has not been shown to be more accurate than methods that average altimeter data to form a reference surface used in analysis to obtain the Gulf Stream characteristics. However, the results show the geoid approach may be used in areas where a lack of current meandering reduces the accuracy of the average-surface procedure.
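
    A minimal sketch of fitting the Gaussian velocity representation described above to synthetic cross-stream velocities; the paper's exact parameterization may differ, and all numbers here are made up.

        import numpy as np
        from scipy.optimize import curve_fit

        # Gaussian jet: v(x) = v_max * exp(-((x - x0) / w)**2), with width w,
        # peak velocity v_max, and peak location x0.
        def gauss_jet(x, v_max, x0, w):
            return v_max * np.exp(-((x - x0) / w)**2)

        x = np.linspace(-100.0, 100.0, 80)                 # km across the stream
        v = gauss_jet(x, 1.8, 5.0, 40.0) \
            + 0.05 * np.random.default_rng(1).standard_normal(x.size)  # m/s, synthetic

        popt, _ = curve_fit(gauss_jet, x, v, p0=[1.0, 0.0, 50.0])
        print("fitted v_max, x0, w:", popt)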

  3. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
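
    A minimal sketch of the metamodeling step: regress a simulated PSA outcome on standardized inputs so that the intercept estimates the base-case outcome and the coefficients rank parameter influence. The inputs, outcome, and coefficients below are simulated placeholders, not the cancer model's.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000                                          # PSA cohorts
        X = rng.standard_normal((n, 3))                     # standardized input parameters
        y = 5.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(n)  # model outcome

        design = np.column_stack([np.ones(n), X])           # intercept + inputs
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        print("intercept (base-case estimate):", round(coef[0], 3))
        print("standardized sensitivities:", coef[1:].round(3))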

  4. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes River in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict the boron content in the Büyük Menderes catchment. Initially, box-whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, different ARIMA models are identified by considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predictions from the best ARIMA models are compared to the observed data and show reasonably good agreement. A comparison of the mean and variance of the 3-year (2002-2004) observed data with the predictions from the selected best models shows that the ARIMA boron models can be used with confidence, since their predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting the boron concentration series of a river.
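
    The three-step procedure described above (identification, estimation, diagnostic checking) can be sketched with a standard time series library; the example below assumes the statsmodels API and uses a placeholder series rather than the boron record.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import acorr_ljungbox

        y = np.cumsum(np.random.default_rng(0).standard_normal(120))   # placeholder series

        # Identification + estimation: fit a small grid of orders, keep the minimum-AIC model.
        best = min(
            ((p, d, q) for p in range(3) for d in range(2) for q in range(3)),
            key=lambda order: ARIMA(y, order=order).fit().aic,
        )
        res = ARIMA(y, order=best).fit()
        print("selected order:", best, "AIC:", round(res.aic, 1))

        # Diagnostic checking: residuals should be uncorrelated (high Ljung-Box p value).
        print(acorr_ljungbox(res.resid, lags=[10]))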

  5. Material Properties from Air Puff Corneal Deformation by Numerical Simulations on Model Corneas.

    PubMed

    Bekesi, Nandor; Dorronsoro, Carlos; de la Hoz, Andrés; Marcos, Susana

    2016-01-01

    The aim was to validate a new method for reconstructing corneal biomechanical properties from air puff corneal deformation images, using hydrogel polymer model corneas and porcine corneas. Air puff deformation imaging was performed on model eyes with artificial corneas made out of three different hydrogel materials with three different thicknesses, and on porcine eyes, at a constant intraocular pressure of 15 mmHg. The corneal air puff deformation was modeled using finite elements, and hyperelastic material parameters were determined through inverse modeling, minimizing the difference between the simulated and measured central deformation amplitude and central-peripheral deformation ratio parameters. Uniaxial tensile tests were performed on the model cornea materials as well as on corneal strips, and the results were compared to stress-strain simulations assuming the reconstructed material parameters. The measured and simulated spatial and temporal profiles of the air puff deformation tests were in good agreement (< 7% average discrepancy). The simulated stress-strain curves of the studied hydrogel corneal materials fitted the experimental stress-strain curves from uniaxial extensiometry well, particularly in the 0-0.4 strain range. The equivalent Young's moduli of the material properties reconstructed from air puff data were 0.31, 0.58 and 0.48 MPa for the three polymer materials, respectively, which differed by < 1% from those obtained from extensiometry. Simulations of the same material but different thickness resulted in similar reconstructed material properties. The air-puff-reconstructed average equivalent Young's modulus of the porcine corneas was 1.3 MPa, within 18% of that obtained from extensiometry. Air puff corneal deformation imaging with inverse finite element modeling can retrieve material properties of model hydrogel polymer corneas and real corneas that are in good correspondence with those obtained from uniaxial extensiometry, suggesting that this is a promising technique for retrieving quantitative corneal biomechanical properties.

  6. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  7. A comparison between a new model and current models for estimating trunk segment inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A

    2009-01-05

    Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform, gender-specific density function. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the current models used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy of the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals of distinct morphology (e.g., obese individuals). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.

  8. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (lnK) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the agreement between model and data compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and produced an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.

  9. M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting

    NASA Astrophysics Data System (ADS)

    Abgarmi, Bizhan; Ozacar, A. Arda

    2017-04-01

    Shear wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, the delay times and fast-axis orientations obtained from reliable results are averaged to define the anisotropy beneath recording seismic stations. However, splitting parameters show systematic variations with back azimuth in the presence of complex anisotropy and cannot be represented by an average time delay and fast-axis orientation. Previous researchers have identified anisotropic complexities in different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled by using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split was developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherent non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter tradeoffs. In order to minimize model ambiguity and identify the robust model parameters, various misfit calculation procedures were also developed and embedded in M-Split; these can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program using real, noisy data; for this purpose, stations from two different networks were utilized. The first is the Kandilli Observatory and Earthquake Research Institute (KOERI) network, which includes long-running permanent stations, and the second comprises seismic stations deployed temporarily as part of the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF. It is also worth noting that M-Split is designed as an open-source program that can be modified by users for additional capabilities or for other applications.

  10. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

    In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived. The constraints determining the range of application of the one-equation model are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations. The global diffusivity is calculated by solving the closure problem.

  11. Recognition and characterization of hierarchical interstellar structure. II - Structure tree statistics

    NASA Technical Reports Server (NTRS)

    Houlahan, Padraig; Scalo, John

    1992-01-01

    A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.

  12. Key parameters of the sediment surface morphodynamics in an estuary - An assessment of model solutions

    NASA Astrophysics Data System (ADS)

    Sampath, D. M. R.; Boski, T.

    2018-05-01

    Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats between mean sea level and low tide level, whereas accretion occurred mainly in a subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the intertidal and subtidal compartments of the estuarine system evolve differently according to the dominant process (erosion or accretion). As the Guadiana estuarine system shows dominantly erosional behaviour in the context of sea-level rise and the reduction in sediment supply after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of the simulation cells in the studied area. Such a level of accuracy is acceptable for an idealized model of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost-effective HESM will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.

  13. Modeling annual extreme temperature using generalized extreme value distribution: A case study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Salam, Norfatin; Kassim, Suraiya

    2013-04-01

    Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the generalized extreme value (GEV) distribution. The augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the likelihood ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model with a linear trend in the location parameter, while the Kota Kinabalu and Sibu stations are better fitted by a model with a trend in the logarithm of the scale parameter. The return level, that is, the level of events (maximum temperature) expected to be exceeded once, on average, in a given number of years, is also obtained.
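
    For reference, a stationary GEV fit and return level can be computed with standard tools; the annual maxima below are synthetic, and the non-stationary (trend) models used in the paper are not reproduced here.

        import numpy as np
        from scipy.stats import genextreme

        # Synthetic annual maximum temperatures (deg C); scipy's shape convention is c = -xi.
        annual_max = genextreme.rvs(c=-0.1, loc=33.0, scale=1.2, size=40, random_state=0)
        c, loc, scale = genextreme.fit(annual_max)

        # T-year return level: the value exceeded once, on average, every T years.
        T = 50
        return_level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
        print(f"{T}-year return level: {return_level:.2f} deg C")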

  14. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    NASA Astrophysics Data System (ADS)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. Candidate SARIMA models were identified by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by the maximum-likelihood method, with the help of the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves over the histogram and a Q-Q plot). Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are presented using the selected model.

  15. Comparative evaluation of a new lactation curve model for pasture-based Holstein-Friesian dairy cows.

    PubMed

    Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O

    2012-09-01

    Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model against the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, the 5 best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting average lactation, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model most clearly displayed this failing. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by the sampled data, and was more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
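
    For reference, the widely used incomplete gamma model mentioned above (Wood's curve, y(t) = a t^b exp(-c t)) can be fitted as follows; the test-day data are synthetic, and the new log-quadratic model itself is not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def wood(t, a, b, c):
            # Incomplete gamma lactation curve (Wood, 1967).
            return a * t**b * np.exp(-c * t)

        t = np.arange(5.0, 305.0, 10.0)                    # days in milk
        y = wood(t, 15.0, 0.25, 0.004) \
            + np.random.default_rng(2).normal(0.0, 0.3, t.size)  # kg/day, synthetic

        popt, _ = curve_fit(wood, t, y, p0=[10.0, 0.2, 0.003])
        print("fitted a, b, c:", popt)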

  16. Geographic variation in survival and migratory tendency among North American Common Mergansers

    USGS Publications Warehouse

    Pearce, J.M.; Reed, J.A.; Flint, Paul L.

    2005-01-01

    Movement ecology and demographic parameters for the Common Merganser (Mergus merganser americanus) in North America are poorly known. We used band-recovery data from five locations across North America spanning the years 1938-1998 to examine migratory patterns and estimate survival rates. We examined competing time-invariant, age-graduated models with program MARK to study sources of variation in survival and reporting probability. We considered age, sex, geographic location, and the use of nasal saddles on hatching year birds at one location as possible sources of variation. Year-of-banding was included as a covariate in a post-hoc analysis. We found that migratory tendency, defined as the average distance between banding and recovery locations, varied geographically. Similarly, all models accounting for the majority of variation in recovery and survival probabilities included location of banding. Models that included age and sex received less support, but we lacked sufficient data to adequately assess these parameters. Model-averaged estimates of annual survival ranged from 0.21 in Michigan to 0.82 in Oklahoma. Heterogeneity in migration tendency and survival suggests that demographic patterns may vary across geographic scales, with implications for the population dynamics of this species.

  17. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters, and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range, which correlates well with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions at two successive sampling times, showing the mode's tendency to stay close to the minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but confined between energy barriers.

  18. Increase in winter haze over eastern China in recent decades: Roles of variations in meteorological parameters and anthropogenic emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Liao, Hong; Lou, Sijia

    The increase in winter haze over eastern China in recent decades due to variations in meteorological parameters and anthropogenic emissions was quantified using observed atmospheric visibility from the National Climatic Data Center Global Summary of Day database for 1980–2014 and simulated PM2.5 concentrations for 1985–2005 from the Goddard Earth Observing System (GEOS) chemical transport model (GEOS-Chem). Observed winter haze days averaged over eastern China (105–122.5°E, 20–45°N) increased from 21 days in 1980 to 42 days in 2014, and from 22 to 30 days between 1985 and 2005. The GEOS-Chem model captured the increasing trend of winter PM2.5 concentrations for 1985–2005, with concentrations averaged over eastern China increasing from 16.1 μg m^-3 in 1985 to 38.4 μg m^-3 in 2005. Considering variations in both anthropogenic emissions and meteorological parameters, the model simulated an increase in winter surface-layer PM2.5 concentrations of 10.5 (±6.2) μg m^-3 decade^-1 over eastern China. The increasing trend was only 1.8 (±1.5) μg m^-3 decade^-1 when variations in meteorological parameters alone were considered. Among the meteorological parameters, the weakening of winds by -0.09 m s^-1 decade^-1 over 1985–2005 was found to be the dominant factor leading to the decadal increase in winter aerosol concentrations and haze days over eastern China during recent decades.

  19. Parameter estimation in a human operator describing function model for a two-dimensional tracking task

    NASA Technical Reports Server (NTRS)

    Vanlunteren, A.

    1977-01-01

    A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two-dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross-couplings. From the Fourier coefficients of the relevant signals, the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that the crossover model applies for the elements in the main loops. A weak linear cross-coupling existed with the same dynamics as the elements in the main loops but with a negative sign.

  20. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K (for the averages) and 2.4 K (for the standard deviations). It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
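
    A minimal sketch of the Metropolis-type sampling underlying such an MCMC calibration: perturb a parameter, evaluate the Tb misfit, and accept or reject. The one-parameter "forward model", observation, and error scale below are stand-ins, not GEOS-5 or SMOS quantities; the 2.4 K scale merely echoes the observation-error magnitude quoted above.

        import numpy as np

        rng = np.random.default_rng(0)
        tb_obs = 250.0                                     # "observed" long-term Tb mean (K), stand-in

        def neg_log_like(theta):
            tb_sim = 240.0 + 20.0 * theta                  # toy forward model Tb(roughness)
            return (tb_sim - tb_obs)**2 / (2.0 * 2.4**2)   # Gaussian misfit

        theta, chain = 0.5, []
        for _ in range(20_000):
            prop = theta + 0.05 * rng.standard_normal()    # random-walk proposal
            if np.log(rng.uniform()) < neg_log_like(theta) - neg_log_like(prop):
                theta = prop                               # accept
            chain.append(theta)

        post = np.array(chain[5_000:])                     # drop burn-in
        print(f"posterior mean = {post.mean():.3f}, sd = {post.std():.3f}")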

  1. Computational modeling of cardiovascular response to orthostatic stress

    NASA Technical Reports Server (NTRS)

    Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.

    2002-01-01

    The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (≤ 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
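
    The simplest building block of such a closed-loop lumped-parameter circulation is a Windkessel compartment; the two-element sketch below is offered only to illustrate the modeling style, with round-number parameters that are not from the paper.

        import numpy as np
        from scipy.integrate import odeint

        # Two-element Windkessel: C dP/dt = Q_in(t) - P / R.
        R, C = 1.0, 1.5                                    # mmHg.s/ml and ml/mmHg, illustrative

        def q_in(t):
            # Pulsatile inflow with a 1 s period (half-wave rectified, squared sine).
            return 300.0 * np.maximum(np.sin(2.0 * np.pi * t), 0.0)**2

        def dpdt(p, t):
            return (q_in(t) - p / R) / C

        t = np.linspace(0.0, 10.0, 2000)
        p = odeint(dpdt, 80.0, t)[:, 0]
        print(f"steady-state pressure range: {p[1000:].min():.1f}-{p[1000:].max():.1f} mmHg")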

  2. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
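
    The attenuation effect described above is easy to reproduce: estimating an AR(1) coefficient directly from noisy observations biases it toward zero. A minimal simulation follows, with a noise share comparable to the 30-50% range mentioned in the abstract.

        import numpy as np

        rng = np.random.default_rng(0)
        phi, n = 0.6, 5000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()  # latent AR(1) process

        y = x + rng.standard_normal(n)                     # measurement noise (~40% of total variance)

        def lag1(z):
            # Lag-1 autocorrelation = naive AR(1) estimate that ignores measurement error.
            z = z - z.mean()
            return np.dot(z[1:], z[:-1]) / np.dot(z, z)

        print(f"true phi = {phi}, naive estimate from noisy data = {lag1(y):.3f}")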

  3. A phenomenological continuum model for force-driven nano-channel liquid flows

    NASA Astrophysics Data System (ADS)

    Ghorbanian, Jafar; Celebi, Alper T.; Beskok, Ali

    2016-11-01

    A phenomenological continuum model is developed using systematic molecular dynamics (MD) simulations of force-driven liquid argon flows confined in gold nano-channels at a fixed thermodynamic state. Well-known density layering near the walls leads to the definition of an effective channel height and a density deficit parameter. While the former defines the slip plane, the latter parameter relates the channel-averaged density to the desired thermodynamic state value. Definitions of these new parameters require a single MD simulation, performed for a specific liquid-solid pair at the desired thermodynamic state and used for calibration of the model parameters. Combined with our observations of constant slip length and kinematic viscosity, the model accurately predicts the velocity distribution and the volumetric and mass flow rates for force-driven liquid flows in nano-channels of different heights. The model is verified for liquid argon flow at distinct thermodynamic states and using various argon-gold interaction strengths. Further verification is performed for water flow in silica and gold nano-channels, exhibiting slip lengths of 1.2 nm and 15.5 nm, respectively. Excellent agreement between the model and the MD simulations is reported for channel heights as small as 3 nm for various liquid-solid pairs.
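
    To illustrate the role of the slip plane and slip length in such a model, the sketch below evaluates the standard slip-corrected Poiseuille flow rate per unit width, Q = (-dp/dx) h^3 / (12 mu) (1 + 6 Ls / h), with h read as the effective channel height; the driving force and fluid properties are illustrative, though the 15.5 nm slip length echoes the value quoted above.

        # Slip-corrected plane Poiseuille flow rate per unit width.
        def slip_flow_rate(dpdx, h, mu, Ls):
            return (-dpdx) * h**3 / (12.0 * mu) * (1.0 + 6.0 * Ls / h)

        h = 5e-9                     # effective channel height, m (illustrative)
        mu = 8.9e-4                  # Pa.s, water-like viscosity
        no_slip = slip_flow_rate(dpdx=-1e15, h=h, mu=mu, Ls=0.0)
        with_slip = slip_flow_rate(dpdx=-1e15, h=h, mu=mu, Ls=15.5e-9)
        print(f"flow enhancement with Ls = 15.5 nm: {with_slip / no_slip:.1f}x")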

  4. Perspectives on continuum flow models for force-driven nano-channel liquid flows

    NASA Astrophysics Data System (ADS)

    Beskok, Ali; Ghorbanian, Jafar; Celebi, Alper

    2017-11-01

    A phenomenological continuum model is developed using systematic molecular dynamics (MD) simulations of force-driven liquid argon flows confined in gold nano-channels at a fixed thermodynamic state. Well-known density layering near the walls leads to the definition of an effective channel height and a density deficit parameter. While the former defines the slip plane, the latter parameter relates the channel-averaged density to the desired thermodynamic state value. Definitions of these new parameters require a single MD simulation, performed for a specific liquid-solid pair at the desired thermodynamic state and used for calibration of the model parameters. Combined with our observations of constant slip length and kinematic viscosity, the model accurately predicts the velocity distribution and the volumetric and mass flow rates for force-driven liquid flows in nano-channels of different heights. The model is verified for liquid argon flow at distinct thermodynamic states and using various argon-gold interaction strengths. Further verification is performed for water flow in silica and gold nano-channels, exhibiting slip lengths of 1.2 nm and 15.5 nm, respectively. Excellent agreement between the model and the MD simulations is reported for channel heights as small as 3 nm for various liquid-solid pairs.

  5. Deriving movement properties and the effect of the environment from the Brownian bridge movement model in monkeys and birds.

    PubMed

    Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P

    2015-01-01

    The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore, we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but distinctly different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would not have been detected without using the BBMM. Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Movement parameters derived from the BBMM can therefore provide a powerful method for movement ecology research.
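
    At the heart of the BBMM is the Brownian bridge position density between consecutive fixes: at time t in (0, T) the expected position interpolates linearly between the fixes, and the variance is sigma_m^2 t (T - t) / T (location-error terms are omitted here). A minimal sketch under these assumptions, not the authors' extended algorithm:

        import numpy as np

        # Brownian bridge between 1-D fixes a (at t = 0) and b (at t = T);
        # sigma_m2 is the Brownian motion variance parameter.
        def bridge_mean_var(a, b, T, t, sigma_m2):
            mean = a + (t / T) * (b - a)
            var = sigma_m2 * t * (T - t) / T
            return mean, var

        a, b, T, sigma_m2 = 0.0, 100.0, 600.0, 2.0   # positions in m, a 10-minute gap
        for t in (150.0, 300.0, 450.0):
            m, v = bridge_mean_var(a, b, T, t, sigma_m2)
            print(f"t = {t:.0f} s: mean = {m:.1f} m, sd = {np.sqrt(v):.1f} m")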

  6. Contrasts between source parameters of M ≥ 5.5 earthquakes in northern Baja California and southern California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doser, D.I.

    1993-04-01

    Source parameters determined from body waveform modeling of large (M ≥ 5.5) historic earthquakes occurring between 1915 and 1956 along the San Jacinto and Imperial fault zones of southern California and the Cerro Prieto, Tres Hermanas and San Miguel fault zones of Baja California have been combined with information from post-1960s events to study regional variations in source parameters. The results suggest that large earthquakes along the relatively young San Miguel and Tres Hermanas fault zones have complex rupture histories, small source dimensions (< 25 km), high stress drops (60 bar average), and a high incidence of foreshock activity. This may be a reflection of the rough, highly segmented nature of the young faults. In contrast, Imperial-Cerro Prieto events of similar magnitude have low stress drops (16 bar average) and longer rupture lengths (42 km average), reflecting rupture along older, smoother fault planes. Events along the San Jacinto fault zone appear to lie in between these two groups. These results suggest a relationship between the structural and seismological properties of strike-slip faults that should be considered during seismic risk studies.

  7. Estimation of the Reactive Flow Model Parameters for an Ammonium Nitrate-Based Emulsion Explosive Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ribeiro, J. B.; Silva, C.; Mendes, R.

    2010-10-01

    A real-coded genetic algorithm methodology developed for estimating the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. In a single optimization procedure, using only one experimental result and without the need for any starting solution, the methodology seeks the 15 parameters of the reaction rate equation that fit the numerical results to the experimental ones. Mass averaging and the plate-gap model were used to determine the shock data for the unreacted explosive JWL equation of state (EOS) assessment, and the thermochemical code THOR provided the data used in the detonation products' JWL EOS assessment. The methodology was applied to estimate these parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
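
    The optimization itself can be pictured with a generic real-coded genetic algorithm such as the minimal sketch below; the operators (blend crossover, uniform mutation, elitism) and all settings are illustrative assumptions, not the authors' exact scheme, and `misfit` stands for the discrepancy between simulated and measured gauge pressure histories.

        import numpy as np

        rng = np.random.default_rng(0)

        def real_coded_ga(misfit, lo, hi, pop_size=60, generations=200,
                          crossover_rate=0.9, mutation_rate=0.05):
            """Minimize `misfit` over the box [lo, hi] (numpy arrays) with a
            simple elitist real-coded GA; no starting solution is required."""
            n = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, n))
            for _ in range(generations):
                cost = np.array([misfit(p) for p in pop])
                pop = pop[np.argsort(cost)]            # keep the best half (elitism)
                children = pop[: pop_size // 2].copy()
                for child in children:
                    if rng.random() < crossover_rate:  # blend with a random elite
                        mate = pop[rng.integers(pop_size // 2)]
                        w = rng.uniform(-0.5, 1.5, n)
                        child[:] = w * child + (1 - w) * mate
                    m = rng.random(n) < mutation_rate  # uniform mutation
                    child[m] = rng.uniform(lo[m], hi[m])
                    np.clip(child, lo, hi, out=child)
                pop[pop_size // 2:] = children
            return pop[0]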

  8. Forcing Regression through a Given Point Using Any Familiar Computational Routine.

    DTIC Science & Technology

    1983-03-01

    a linear model, Y = α + βX + ε (Model I); then adopt the principle of least squares and use sample data to estimate the unknown parameters, α and β...has an expected value of zero indicates that the "average" response is considered linear. If ε varies widely, Model I, though conceptually correct, may...relationship is linear from the maximum observed x to x - a, then Model II should be used. To proceed with the customary evaluation of Model I would be
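
    Although the excerpt above is truncated, the technique named in the title is standard: shift the data so the prescribed point becomes the origin and fit a no-intercept model with any familiar routine. A minimal sketch (names illustrative):

        import numpy as np

        def slope_through_point(x, y, x0, y0):
            """Least-squares slope of a line forced through (x0, y0): translate
            the data and fit a zero-intercept model."""
            xs = np.asarray(x, float) - x0
            ys = np.asarray(y, float) - y0
            return np.sum(xs * ys) / np.sum(xs ** 2)

        x = np.array([1.0, 2.0, 3.0, 4.0])
        y = np.array([1.1, 1.9, 3.2, 3.9])
        print(slope_through_point(x, y, 0.0, 0.0))  # fit forced through the origin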

  9. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests of the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst-case average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when the latter's stream resolution is set to generate precise calculations.
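
    The delta-function transformation mentioned above is the standard similarity scaling that moves the forward-scattering peak into the direct beam. A minimal sketch of that step alone (the Sobolev solution itself is not reproduced), with the usual choice f = g**2:

        def delta_scale(tau, omega, g):
            """Delta-function (similarity) scaling of optical depth tau,
            single-scattering albedo omega and asymmetry parameter g."""
            f = g ** 2                                  # forward-peak fraction
            tau_s = (1.0 - omega * f) * tau
            omega_s = (1.0 - f) * omega / (1.0 - omega * f)
            g_s = (g - f) / (1.0 - f)
            return tau_s, omega_s, g_s

        print(delta_scale(tau=1.0, omega=0.9, g=0.85))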

  10. Item response theory analysis of the Utrecht Work Engagement Scale for Students (UWES-S) using a sample of Japanese university and college students majoring in medical science, nursing, and natural science.

    PubMed

    Tsubakita, Takashi; Shimazaki, Kazuyo; Ito, Hiroshi; Kawazoe, Nobuo

    2017-10-30

    The Utrecht Work Engagement Scale for Students has been used internationally to assess students' academic engagement, but it has not been analyzed via item response theory. The purpose of this study was to conduct an item response theory analysis of the Japanese version of the Utrecht Work Engagement Scale for Students translated by the authors. Using a two-parameter model and Samejima's graded response model, difficulty and discrimination parameters were estimated after confirming the factor structure of the scale. The 14 items on the scale were analyzed with a sample of 3214 university and college students majoring in medical science, nursing, or natural science in Japan. Preliminary parameter estimation with the two-parameter model indicated that three items should be removed because their parameters were outliers. Final parameter estimation, conducted on the remaining 11 items, indicated that all difficulty and discrimination parameters were acceptable. The test information curve suggested that the scale assesses higher engagement better than average engagement. The estimated parameters provide a basis for future comparative studies. The results also suggested that a 7-point Likert scale is too broad; the scaling should therefore be modified to a structure with fewer response categories.
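
    For reference, category probabilities in Samejima's graded response model are differences of adjacent two-parameter logistic curves. A minimal sketch (values illustrative, not the study's estimates):

        import numpy as np

        def grm_category_probs(theta, a, b):
            """P(X = k) for ability theta, discrimination a and ordered
            thresholds b (length m-1 for m response categories)."""
            p_ge = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, float))))
            p_ge = np.concatenate(([1.0], p_ge, [0.0]))   # P(X >= k), k = 0..m
            return p_ge[:-1] - p_ge[1:]

        print(grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5]))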

  11. Assessing soil erosion using the USLE model and MODIS data in Guangdong, China

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Wang, Yunpeng; Yang, Jingxue

    2017-07-01

    In this study, soil erosion in Guangdong, China during 2012 was quantitatively assessed using the Universal Soil Loss Equation (USLE). The parameters of the model were calculated using GIS and MODIS data, and the spatial distribution of the average annual soil loss was mapped on a grid basis. The estimated average annual soil erosion in Guangdong in 2012 is about 2294.47 t/(km²·a). Four highly sensitive areas of soil erosion were identified; soil erosion in these areas was attributed mainly to land cover type, rainfall, economic development, and human activity.
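
    The USLE itself is a simple product of factor grids, A = R * K * LS * C * P; a minimal sketch with hypothetical 2 x 2 factor layers (as would be derived from the GIS and MODIS inputs):

        import numpy as np

        def usle_soil_loss(R, K, LS, C, P):
            """Average annual soil loss per cell: A = R * K * LS * C * P."""
            return R * K * LS * C * P

        R = np.array([[5000.0, 5200.0], [4800.0, 5100.0]])  # rainfall erosivity
        K = np.array([[0.03, 0.04], [0.02, 0.03]])          # soil erodibility
        LS = np.array([[1.2, 2.5], [0.8, 1.6]])             # slope length-steepness
        C = np.array([[0.10, 0.30], [0.05, 0.20]])          # cover management
        P = np.ones((2, 2))                                 # support practice
        print(usle_soil_loss(R, K, LS, C, P))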

  12. Use of ssq rotational invariant of magnetotelluric impedances for estimating informative properties for galvanic distortion

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, T.; Siripunvaraporn, W.; Utada, H.

    2017-06-01

    Several useful properties and parameters (a model of the regional mean one-dimensional (1D) conductivity profile, local and regional distortion indicators, and apparent gains) were defined in our recent paper using two rotational invariants (det: determinant and ssq: sum of squared elements) from a set of magnetotelluric (MT) data obtained by an array of observation sites. In this paper, we demonstrate their characteristics and benefits through synthetic examples using 1D and three-dimensional (3D) models. First, a model of the regional mean 1D conductivity profile is obtained using the average ssq impedance with different levels of galvanic distortion. In contrast to the Berdichevsky average using the average det impedance, the average ssq impedance is shown to yield a reliable estimate of the model of the regional mean 1D conductivity profile, even when severe galvanic distortion is contained in the data. Second, the local and regional distortion indicators were found to indicate the galvanic distortion as expressed by the splitting and shear parameters and to quantify their strengths in individual MT data and in the dataset as a whole. Third, the apparent gain was also shown to be a good approximation of the site gain, which is generally claimed to be undeterminable without external information. The model of the regional mean 1D profile could be used as an initial or a priori model in higher-dimensional inversions. The local and regional distortion indicators and apparent gains could be used to detect the presence and to estimate the strength of the galvanic distortion. Although these conclusions were derived from synthetic tests using the Groom-Bailey distortion model, additional tests with different distortion models indicated that they are not strongly dependent on the choice of distortion model. These galvanic-distortion-related parameters would also assist in judging whether a proper treatment of the galvanic distortion is needed for a given MT dataset. Hence, this information derived from the dataset would be useful in MT data analysis and inversion.
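
    The two invariants on which the analysis rests are easy to state; a minimal sketch for a single 2x2 complex impedance tensor, using the usual definitions of the det and ssq invariants (the derived averages and distortion indicators of the paper are not reproduced here):

        import numpy as np

        def det_ssq_invariants(Z):
            """Rotational invariants of a complex 2x2 MT impedance tensor:
            Z_det = sqrt(det Z), Z_ssq = sqrt(sum of squared elements / 2)."""
            z_det = np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])
            z_ssq = np.sqrt((Z ** 2).sum() / 2.0)
            return z_det, z_ssq

        Z = np.array([[0.1 + 0.2j, 1.0 + 0.9j],
                      [-1.1 - 0.8j, -0.05 - 0.1j]])
        print(det_ssq_invariants(Z))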

  13. The Predicted Influence of Climate Change on Lesser Prairie-Chicken Reproductive Parameters

    PubMed Central

    Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, Dawn M.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001–2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected the weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation, obtained from regression calculations, together with the future predicted value of each weather variable, to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. Above-average winter temperatures are correlated with La Niña events, which negatively affect lesser prairie-chickens through the resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival. PMID:23874549
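
    The simulation step can be pictured as below; all numbers are placeholders, not the study's regression estimates, and the normal error term is an assumption about how the residual variation was propagated.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical regression: nest survival = b0 + b1 * winter_temp + error
        b0, b1, resid_sd = 0.60, -0.03, 0.05     # illustrative coefficients only
        winter_temp_2050 = 4.2                   # placeholder projected value

        draws = b0 + b1 * winter_temp_2050 + rng.normal(0.0, resid_sd, 1000)
        print(draws.mean(), np.percentile(draws, [2.5, 97.5]))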

  14. The effects of intraspecific competition and stabilizing selection on a polygenic trait.

    PubMed Central

    Bürger, Reinhard; Gimelfarb, Alexander

    2004-01-01

    The equilibrium properties of an additive multilocus model of a quantitative trait under frequency- and density-dependent selection are investigated. Two opposing evolutionary forces are assumed to act: (i) stabilizing selection on the trait, which favors genotypes with an intermediate phenotype, and (ii) intraspecific competition mediated by that trait, which favors genotypes whose effect on the trait deviates most from that of the prevailing genotypes. Accordingly, fitnesses of genotypes have a frequency-independent component describing stabilizing selection and a frequency- and density-dependent component modeling competition. We study how the equilibrium structure, in particular, number, degree of polymorphism, and genetic variance of stable equilibria, is affected by the strength of frequency dependence, and what role the number of loci, the amount of recombination, and the demographic parameters play. To this end, we employ a statistical and numerical approach, complemented by analytical results, and explore how the equilibrium properties averaged over a large number of genetic systems with a given number of loci and average amount of recombination depend on the ecological and demographic parameters. We identify two parameter regions with a transitory region in between, in which the equilibrium properties of genetic systems are distinctively different. These regions depend on the strength of frequency dependence relative to pure stabilizing selection and on the demographic parameters, but not on the number of loci or the amount of recombination. We further study the shape of the fitness function observed at equilibrium and the extent to which the dynamics in this model are adaptive, and we present examples of equilibrium distributions of genotypic values under strong frequency dependence. Consequences for the maintenance of genetic variation, the detection of disruptive selection, and models of sympatric speciation are discussed. PMID:15280253

  15. Model parameters for representative wetland plant functional groups

    USGS Publications Warehouse

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in adjacent areas as they affect wetlands.
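
    The role of the two key canopy parameters is captured by the Beer-Lambert relation used in such process-based models, where the fraction of light intercepted is f = 1 - exp(-k * LAI); a one-line sketch:

        import numpy as np

        def fraction_light_intercepted(k, lai):
            """Beer-Lambert canopy light interception: f = 1 - exp(-k * LAI)."""
            return 1.0 - np.exp(-k * lai)

        print(fraction_light_intercepted(k=0.5, lai=3.0))  # about 0.78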

  16. Ecological optimality in water-limited natural soil-vegetation systems. I - Theory and hypothesis

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1982-01-01

    The solution space of an approximate statistical-dynamic model of the average annual water balance is explored with respect to the hydrologic parameters of both soil and vegetation. Within the accuracy of this model it is shown that water-limited natural vegetation systems are in stable equilibrium with their climatic and pedologic environments when the canopy density and species act to minimize average water demand stress. Theory shows a climatic limit to this equilibrium above which it is hypothesized that ecological pressure is toward maximization of biomass productivity. It is further hypothesized that natural soil-vegetation systems will develop gradually and synergistically, through vegetation-induced changes in soil structure, toward a set of hydraulic soil properties for which the minimum stress canopy density of a given species is maximum in a given climate. Using these hypotheses, only the soil effective porosity need be known to determine the optimum soil and vegetation parameters in a given climate.

  17. Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment

    NASA Astrophysics Data System (ADS)

    Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav

    2010-05-01

    The Kopaninský tok catchment is situated in the crystalline area of the Bohemian-Moravian Highlands, a hilly region with cambisol cover and prevailing agricultural land use. It has been under long-term observation since the 1980s. Time series (discharge, precipitation, climatic parameters, etc.) are now available at a 10-minute time step, and water quality data are available as average daily composite samples plus samples taken during events. A soil survey yielding reference soil hydraulic properties for individual horizons and a vegetation cover survey including LAI measurements have been completed. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments with the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, providing the basis for modelling the nitrate regime and the influence of agricultural management in the next step. These subcatchments differ in the proportion of artificially drained area, soil types, land use and slope angle. The models are set up on a regular computational grid of 2 m size. The basic time step was set to 2 hrs, and the total simulated period covers 3 years. Runoff response and the moisture regime are compared using spatially distributed simulation results. A sensitivity analysis revealed the most important parameters influencing the model response and underlined the importance of the spatial distribution of initial conditions. Further, different runoff components, in terms of their origin, flow paths and travel time, were separated using a combination of two runoff separation techniques (a digital filter and the simple conceptual model GROUND) in 12 subcatchments of the Kopaninský tok catchment; these two methods were chosen after testing a number of alternatives. Ordination diagrams produced with the Canoco software were used to evaluate the influence of different catchment parameters on the runoff components. A canonical ordination analysis (redundancy analysis, RDA) was used to explain one data set (runoff components, either the volumes of each runoff component or the occurrence of baseflow) with another data set (catchment parameters: proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
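
    The abstract does not say which digital filter was used; the one-parameter Lyne-Hollick filter is a common choice and serves as a minimal sketch of the runoff-separation step:

        import numpy as np

        def lyne_hollick_baseflow(q, alpha=0.925):
            """Split total streamflow q into baseflow and quickflow with the
            one-parameter recursive digital filter (single forward pass)."""
            q = np.asarray(q, float)
            quick = np.zeros_like(q)
            for t in range(1, len(q)):
                quick[t] = (alpha * quick[t - 1]
                            + 0.5 * (1 + alpha) * (q[t] - q[t - 1]))
                quick[t] = min(max(quick[t], 0.0), q[t])  # keep components physical
            return q - quick                              # baseflow

        print(lyne_hollick_baseflow([1.0, 1.2, 3.5, 2.4, 1.6, 1.3]))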

  18. Generation of random microstructures and prediction of sound velocity and absorption for open foams with spherical pores.

    PubMed

    Zieliński, Tomasz G

    2015-04-01

    This paper proposes and discusses an approach to the design and quality inspection of the morphology of sound absorbing foams, using a relatively simple technique for the random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires a certain number of pores in the representative volume element (RVE), and this number is a parameter of the procedure. Another pore-structure parameter, the average size of the windows linking the pores, may be affected only indirectly: it is weakly controlled by the maximal pore-penetration factor and it also depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound absorbing porous media applies multi-scale modeling, in which some important transport parameters responsible for sound propagation in a porous medium are calculated from the generated RVE in order to estimate the sound velocity and absorption of the designed material.

  19. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model that characterizes the effective impulse response function (IRF) is introduced, allowing the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged over different brain regions are close to literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
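
    The fitting step can be illustrated with a deliberately simplified kinetic curve; the three free parameters play the roles of the CBF-proportional amplitude, ATT and T(1,eff), but the functional form below is an assumption for illustration only, not the authors' effective-IRF model:

        import numpy as np
        from scipy.optimize import curve_fit

        def asl_curve(t, amp, att, t1eff):
            """Toy ASL kinetic curve: zero before the transit time, then a
            saturating exponential governed by an effective T1."""
            rise = amp * (1.0 - np.exp(-(t - att) / t1eff))
            return np.where(t > att, rise, 0.0)

        t = np.linspace(0.2, 4.0, 12)                     # post-labeling delays (s)
        data = (asl_curve(t, 1.0, 1.4, 1.9)
                + np.random.default_rng(2).normal(0, 0.02, t.size))
        popt, _ = curve_fit(asl_curve, t, data, p0=[0.5, 1.0, 1.5])
        print(popt)                                       # recovered amp, ATT, T1eff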

  20. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
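
    The comparison hinges on reducing each gridded field to one value per HRU; a minimal zonal-averaging sketch (array names illustrative):

        import numpy as np

        def hru_zonal_mean(grid, hru_id):
            """Average a gridded field (e.g., MODIS ET or SNODAS SWE) over
            each hydrologic response unit given a grid of HRU labels."""
            return {int(i): float(np.nanmean(grid[hru_id == i]))
                    for i in np.unique(hru_id)}

        field = np.array([[1.0, 2.0], [3.0, 4.0]])
        labels = np.array([[1, 1], [2, 2]])
        print(hru_zonal_mean(field, labels))  # {1: 1.5, 2: 3.5}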

  1. Heat transfer characteristics within an array of impinging jets. Effects of crossflow temperature relative to jet temperature

    NASA Technical Reports Server (NTRS)

    Florschuetz, L. W.; Su, C. C.

    1985-01-01

    Spanwise-average heat fluxes, resolved in the streamwise direction to one streamwise hole spacing, were measured for two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate. The jet flow, after impingement, was constrained to exit in a single direction along the channel formed by the jet orifice plate and the heat transfer surface. The crossflow originated from the jets following impingement; an initial crossflow was also present that approached the array through an upstream extension of the channel. The regional average heat fluxes are considered as a function of parameters associated with the corresponding individual spanwise rows within the array. A linear superposition model was employed to formulate appropriate governing parameters for the individual row domain. The effects of flow history upstream of an individual row domain are also considered. The results are formulated in terms of individual spanwise row parameters. A corresponding set of streamwise-resolved heat transfer characteristics formulated in terms of flow and geometric parameters characterizing the overall arrays is described.

  2. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models, each assuming that treatment acts on a specific set of transitions, and to choose among them using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we develop a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using the structured models are reduced by up to eight- or ninefold compared with the unstructured models, and the expected loss attached to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and the expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
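
    The averaged-model EVPI calculation can be sketched with Monte Carlo draws; the numbers below are invented, and model averaging is implemented by sampling each draw from one of the structural models according to its posterior probability:

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical posterior net-benefit draws for 2 treatments under
        # 2 structural models, and a posterior probability for model 1.
        nb_m1 = rng.normal([100.0, 110.0], 25.0, size=(5000, 2))
        nb_m2 = rng.normal([100.0, 95.0], 25.0, size=(5000, 2))
        p_model1 = 0.6

        pick = rng.random(5000) < p_model1             # model indicator per draw
        nb = np.where(pick[:, None], nb_m1, nb_m2)     # model-averaged draws

        # EVPI = E[max over decisions] - max over decisions of E[net benefit]
        evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
        print(evpi)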

  3. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
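
    The paper frames both steps as non-negative least-squares problems; an equivalent toy illustration of the bounding step, posed as a pair of linear programs that minimize and maximize the conductivity of a chosen region subject to fitting linearized data exactly:

        import numpy as np
        from scipy.optimize import linprog

        # Toy linearized problem d = G @ m with non-negative model values m.
        G = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])
        d = np.array([2.0, 3.0])

        c = np.array([0.0, 1.0, 0.0])   # 1-norm over the bounding region {m1}
        lower = linprog(c, A_eq=G, b_eq=d, bounds=(0, None))
        upper = linprog(-c, A_eq=G, b_eq=d, bounds=(0, None))
        print(lower.fun, -upper.fun)    # bounds on the region's conductivity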

  4. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
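
    A minimal sketch of the emulation idea: train a random forest on a mixed ensemble in which resolution enters as an extra feature, then query it at the high-resolution setting (data and relationships below are synthetic placeholders):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(4)

        # 200 cheap low-resolution runs, 20 expensive high-resolution runs;
        # 5 perturbed parameters plus a resolution flag (0 = low, 1 = high).
        X_lo = np.c_[rng.uniform(size=(200, 5)), np.zeros(200)]
        X_hi = np.c_[rng.uniform(size=(20, 5)), np.ones(20)]

        def response(X):                       # synthetic model output
            return X[:, 0] + 0.5 * X[:, 1] * X[:, 5] + rng.normal(0, 0.01, len(X))

        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(np.vstack([X_lo, X_hi]), np.r_[response(X_lo), response(X_hi)])

        X_new = np.c_[rng.uniform(size=(3, 5)), np.ones(3)]  # high-res queries
        print(rf.predict(X_new))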

  5. A size-structured model of bacterial growth and reproduction.

    PubMed

    Ellermeyer, S F; Pilyugin, S S

    2012-01-01

    We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.

  6. Near Real-Time Event Detection & Prediction Using Intelligent Software Agents

    DTIC Science & Technology

    2006-03-01

    value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite

  7. Analyzing ROC curves using the effective set-size model

    NASA Astrophysics Data System (ADS)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
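
    The generative side of the model is simple to simulate: an image rating is the maximum of M* location responses, with one location shifted by d' on signal-present images. A minimal sketch (the ordinal-regression fitting procedure is not reproduced):

        import numpy as np

        rng = np.random.default_rng(5)

        def simulate_ratings(d_prime, m_star, n_images=20000):
            """Ratings under the Effective Set-Size model: max over M*
            independent location responses per image."""
            absent = rng.normal(size=(n_images, m_star)).max(axis=1)
            present = rng.normal(size=(n_images, m_star))
            present[:, 0] += d_prime          # one location holds the signal
            return absent, present.max(axis=1)

        neg, pos = simulate_ratings(d_prime=2.0, m_star=8)
        auc = (pos[:, None] > neg[None, :1000]).mean()  # empirical AUC
        print(auc)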

  8. Documentation of a groundwater flow model developed to assess groundwater availability in the Northern Atlantic Coastal Plain aquifer system from Long Island, New York, to North Carolina

    USGS Publications Warehouse

    Masterson, John P.; Pope, Jason P.; Fienen, Michael N.; Monti, Jr., Jack; Nardi, Mark R.; Finkelstein, Jason S.

    2016-08-31

    The U.S. Geological Survey developed a groundwater flow model for the Northern Atlantic Coastal Plain aquifer system from Long Island, New York, to northeastern North Carolina as part of a detailed assessment of the groundwater availability of the area and included an evaluation of how these resources have changed over time from stresses related to human uses and climate trends. The assessment was necessary because of the substantial dependency on groundwater for agricultural, industrial, and municipal needs in this area. The three-dimensional groundwater flow model developed for this investigation used the numerical code MODFLOW–NWT to represent changes in groundwater pumping and aquifer recharge from predevelopment (before 1900) to future conditions, from 1900 to 2058. The model was constructed using existing hydrogeologic and geospatial information to represent the aquifer system geometry, boundaries, and hydraulic properties of the 19 separate regional aquifers and confining units within the Northern Atlantic Coastal Plain aquifer system and was calibrated using an inverse modeling parameter-estimation (PEST) technique. The parameter estimation process was achieved through history matching, using observations of heads and flows for both steady-state and transient conditions. A total of 8,868 annual water-level observations from 644 wells from 1986 to 2008 were combined into 29 water-level observation groups that were chosen to focus the history matching on specific hydrogeologic units in geographic areas in which distinct geologic and hydrologic conditions were observed. In addition to absolute water-level elevations, the water-level differences between individual measurements were also included in the parameter estimation process to remove the systematic bias caused by missing hydrologic stresses prior to 1986. The total average residual of –1.7 feet was normally distributed for all head groups, indicating minimal bias. The average absolute residual value of 12.3 feet is about 3 percent of the total observed water-level range throughout the aquifer system. Streamflow observation data of base flow conditions were derived for 153 sites from the U.S. Geological Survey National Hydrography Dataset Plus and National Water Information System. An average residual of about –8 cubic feet per second and an average absolute residual of about 21 cubic feet per second for a range of computed base flows of about 417 cubic feet per second were calculated for the 122 sites from the National Hydrography Dataset Plus. An average residual of about 10 cubic feet per second and an average absolute residual of about 34 cubic feet per second were calculated for the 568 flow measurements in the 31 sites obtained from the National Water Information System for a range in computed base flows of about 1,141 cubic feet per second. The numerical representation of the hydrogeologic information used in the development of this regional flow model was dependent upon how the aquifer system and simulated hydrologic stresses were discretized in space and time. Lumping hydraulic parameters in space and hydrologic stresses and time-varying observational data in time can limit the capabilities of this tool to simulate how the groundwater flow system responds to changes in hydrologic stresses, particularly at the local scale.

  9. An IUR evolutionary game model on the patent cooperate of Shandong China

    NASA Astrophysics Data System (ADS)

    Liu, Mengmeng; Ma, Yinghong; Liu, Zhiyuan; You, Xuemei

    2017-06-01

    Organizations of industries and universities & research institutes cooperate to meet their respective needs, building on social contacts, trust, and complementary resources. From the perspective of complex networks, and using the patent data of Shandong province in China, a novel evolutionary game model on a patent cooperation network is presented. The two sides in the game model are industries and universities & research institutes, respectively. Cooperation is represented by a connection when a new patent is developed jointly by the two sides. The optimal strategy of the evolutionary game model is quantified by the average positive-cooperation probability p̄ and the average payoff Ū. The feasibility of the game model is explored through simulations over parameters such as the knowledge spillover, the punishment, the development cost and the distribution coefficient of the benefit. The numerical simulations show that cooperative behavior is affected by the variation of these parameters; in particular, knowledge spillover acts differently depending on whether the punishment is larger or smaller than the development cost. The results indicate that a reasonable level of punishment improves positive cooperation and helps induce high-degree nodes to cooperate positively with industries and universities & research institutes. An equitable plan for distributing cooperative profits is an even (half-and-half) split between the two sides of the game.

  10. Expansion and growth of structure observables in a macroscopic gravity averaged universe

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake; Ishak, Mustapha

    2015-03-01

    We investigate the effect of averaging inhomogeneities on expansion and large-scale structure growth observables using the exact and covariant framework of macroscopic gravity (MG). It is well known that applying Einstein's equations and spatial averaging do not commute, which leads to the averaging problem and backreaction terms. For the MG formalism applied to the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, the extra term can be encapsulated as an averaging density parameter denoted ΩA. An exact isotropic cosmological solution of MG for the flat FLRW metric is already known in the literature; we derive here an anisotropic exact solution. Using the isotropic solution, we compare the expansion history to currently available data of distances to supernovae, baryon acoustic oscillations, cosmic microwave background last scattering surface data, and Hubble constant measurements, and find -0.05 ≤ ΩA ≤ 0.07 (at the 95% confidence level). For the flat metric case this reduces to -0.03 ≤ ΩA ≤ 0.05. The positive part of the intervals can be rejected if a mathematical (and physical) prior is taken into account. We also find that the inclusion of this term in the fits can shift the values of the usual cosmological parameters by a few to several percent. Next, we derive an equation for the growth rate of large-scale structure in MG that includes a term due to the averaging and assess its effect on the evolution of the growth compared to that of the Lambda cold dark matter (ΛCDM) concordance model. We find that an ΩA term with an amplitude in the range [-0.04, -0.02] leads to a relative deviation of the growth from that of ΛCDM of up to 2%-4% at late times. Thus, the shift in the growth could be of comparable amplitude to that caused by similar changes in cosmological parameters such as the dark energy density parameter or its equation of state. The effect could also be comparable in amplitude to some systematic effects considered for future surveys. This indicates that the averaging term and its possible effect need to be tightly constrained in future precision cosmological studies.

  11. Properties of radar backscatter of forests measured with a multifrequency polarimetric SAR

    NASA Technical Reports Server (NTRS)

    Amar, F.; Karam, M. A.; Fung, A. K.; De Grandi, G.; Lavalle, C.; Sieber, A.

    1992-01-01

    Fully polarimetric airborne synthetic aperture radar (AIRSAR) data, collected in Germany during the MAC Europe campaign, are calibrated using software packages developed at the Joint Research Center (JRC) in Italy for both L- and C-bands. During the period of the overflight dates, extensive ground truth was collected in order to describe the physical and statistical parameters of the canopy, the understory, and the soil. These parameters are compiled and converted into electromagnetic parameters suitable for input to the new polarimetric three-layer canopy model developed at the Wave Scattering Research Center (WSRC) at the University of Texas at Arlington. Comparisons between the theoretical predictions from the model and the calibrated data are carried out. Initial results reveal that the trend of the average phase difference can be predicted by the model, and that the backscattering ratio σhh/σvv is sensitive to the distribution of the primary branches.

  12. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters at sites with measurement data such as NEE, and the application of those parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle and the average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed those based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, demonstrating their potential for upscaling. However, simulation results also indicate that the estimated parameters may mask other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  13. Understanding the past to interpret the future: Comparison of simulated groundwater recharge in the upper Colorado River basin (USA) using observed and general-circulation-model historical climate data

    USGS Publications Warehouse

    Tillman, Fred D.; Gangopadhyay, Subhrendu; Pruitt, Tom

    2017-01-01

    In evaluating potential impacts of climate change on water resources, water managers seek to understand how future conditions may differ from the recent past. Studies of climate impacts on groundwater recharge often compare simulated recharge from future and historical time periods on an average monthly or overall average annual basis, or compare average recharge from future decades to that from a single recent decade. Baseline historical recharge estimates, which are compared with future conditions, are often from simulations using observed historical climate data. Comparison of average monthly results, average annual results, or even averages over selected historical decades, may mask the true variability in historical results and lead to misinterpretation of future conditions. Comparison of future recharge results simulated using general circulation model (GCM) climate data to recharge results simulated using actual historical climate data may also result in an incomplete understanding of the likelihood of future changes. In this study, groundwater recharge is estimated in the upper Colorado River basin, USA, using a distributed-parameter soil-water balance groundwater recharge model for the period 1951–2010. Recharge simulations are performed using precipitation, maximum temperature, and minimum temperature data from observed climate data and from 97 CMIP5 (Coupled Model Intercomparison Project, phase 5) projections. Results indicate that average monthly and average annual simulated recharge are similar using observed and GCM climate data. However, 10-year moving-average recharge results show substantial differences between results based on observed and simulated climate data, particularly during the period 1970–2000, with much greater variability seen in results using observed climate data.
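
    The smoothing that reveals those differences is an ordinary 10-year moving average; a one-function sketch on illustrative annual values:

        import numpy as np

        def moving_average(x, window=10):
            """Moving average of an annual series over `window` years."""
            return np.convolve(x, np.ones(window) / window, mode="valid")

        annual = np.random.default_rng(6).gamma(2.0, 10.0, 60)  # 1951-2010, fake
        print(moving_average(annual, 10).round(1))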

  14. Photospheres of hot stars. IV - Spectral type O4

    NASA Technical Reports Server (NTRS)

    Bohannan, Bruce; Abbott, David C.; Voels, Stephen A.; Hummer, David G.

    1990-01-01

    The basic stellar parameters of a supergiant (Zeta Pup) and two main-sequence stars, 9 Sgr and HD 46223, at spectral class O4 are determined using line profile analysis. The stellar parameters are determined by comparing high signal-to-noise hydrogen and helium line profiles with those from stellar atmosphere models which include the effect of radiation scattered back onto the photosphere from an overlying stellar wind, an effect referred to as wind blanketing. At spectral class O4, the inclusion of wind-blanketing in the model atmosphere reduces the effective temperature by an average of 10 percent. This shift in effective temperature is also reflected by shifts in several other stellar parameters relative to previous O4 spectral-type calibrations. It is also shown through the analysis of the two O4 V stars that scatter in spectral type calibrations is introduced by assuming that the observed line profile reflects the photospheric stellar parameters.

  15. Phenomenological Constitutive Modeling of High-Temperature Flow Behavior Incorporating Individual and Coupled Effects of Processing Parameters in Super-austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra

    2018-05-01

    A novel constitutive model has been developed for predicting the flow response of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). The predictability of this new model has been compared with that of the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is not well suited to flow prediction, as it exhibits a very high (~36%) average absolute error (δ) and a low (~0.92) correlation coefficient (R). In contrast, the M-ZA model demonstrates a relatively lower δ (~13%) and higher R (~0.96); the incorporation of couplings of processing parameters in the M-ZA model yields better predictions than the JC model. However, the flow analyses of the studied alloy reveal additional synergistic influences of strain and strain rate, as well as of strain, temperature and strain rate, beyond those considered in the M-ZA model. Hence, the new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters together with a `strain-shifting' parameter. The proposed model predicts the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99).
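
    For orientation, the baseline JC model against which the new model is compared has the familiar multiplicative form shown below; the constants are illustrative, not the fitted values for this alloy:

        import numpy as np

        def johnson_cook_stress(strain, rate, T, A, B, n, C, m,
                                rate0=1.0, T_ref=1173.0, T_melt=1673.0):
            """Johnson-Cook flow stress:
            (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
            T_star = (T - T_ref) / (T_melt - T_ref)   # homologous temperature
            return ((A + B * strain ** n)
                    * (1.0 + C * np.log(rate / rate0))
                    * (1.0 - T_star ** m))

        print(johnson_cook_stress(0.3, 0.01, 1273.0,
                                  A=120.0, B=300.0, n=0.4, C=0.05, m=1.0))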

  16. Time series modelling of increased soil temperature anomalies during long period

    NASA Astrophysics Data System (ADS)

    Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar

    2015-10-01

    Soil temperature just beneath the soil surface is highly dynamic and has a direct impact on plant seed germination; it is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict weekly anomalies of soil temperature at 10 cm depth, one of the most important soil parameters. Weekly soil temperature anomalies for the periods January 1986-December 2011 and January 2012-December 2013 were used to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum value of the Akaike information criterion, and its estimated coefficients differed from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data, 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, over the period 1986-2011.
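
    Fitting and forecasting with an ARIMA(2,1,1) model takes only a few lines in, for example, statsmodels (the series below is a random placeholder, not the study's soil-temperature anomalies):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(7)
        anomalies = rng.normal(0.0, 1.0, 1352)   # placeholder weekly anomalies

        fit = ARIMA(anomalies, order=(2, 1, 1)).fit()
        print(fit.aic)                 # model selection criterion
        print(fit.forecast(steps=1))   # 1-week-ahead prediction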

  17. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry, and a number of computational intelligence-based predictive methods with high prediction capability have been widely applied, including feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles; corrected air permeability is the target variable. Applying the developed permeability models in a modern reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.

  18. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a corresponding decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. The instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While it was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
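
    The digital analogue of the instrument's core operation is a reshape-and-mean over cycles; a minimal sketch assuming the signal is resampled to 2048 points per cycle:

        import numpy as np

        def average_cycles(signal, n_cycles=100, points_per_cycle=2048):
            """Ensemble-average a cyclic waveform over n_cycles."""
            s = np.asarray(signal, float)[: n_cycles * points_per_cycle]
            return s.reshape(n_cycles, points_per_cycle).mean(axis=0)

        sig = np.random.default_rng(8).normal(size=100 * 2048)
        print(average_cycles(sig).shape)  # (2048,)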

  19. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased, and hence estimates of the probability of rare events occurring based on a logistic regression model are inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a single best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. An approximately unbiased estimator of Kullback-Leibler loss is then used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
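
    A toy sketch of the general averaging idea (not the authors' exact perturbation or Kullback-Leibler-loss machinery): fit logistic regressions on resampled data and average the predicted rare-event probabilities.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 3))
    # True model with a rare positive outcome (a few percent of cases)
    p = 1.0 / (1.0 + np.exp(-(-4.0 + X @ np.array([1.0, 0.5, -0.5]))))
    y = rng.binomial(1, p)

    preds = []
    for _ in range(50):                       # crude data perturbation: resampling
        idx = rng.integers(0, n, n)
        m = LogisticRegression().fit(X[idx], y[idx])
        preds.append(m.predict_proba(X)[:, 1])
    p_avg = np.mean(preds, axis=0)            # averaged probability estimates
    print(p_avg[:5])
    ```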

  20. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the functioning problems of macroeconomic structures with low competitiveness. The model is formalized as a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are derived by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the firm market value assessment and the production load level is analyzed using comparative statics methods.

  1. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression, and calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on which model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods use the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternative models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. 
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
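
    As background for how such criteria yield posterior model probabilities (a standard computation, not code from MMA itself), AIC-type values can be converted into normalized model weights:

    ```python
    import numpy as np

    def model_weights(criteria):
        """Posterior-style model probabilities from AIC/AICc/BIC-type values
        (smaller is better)."""
        c = np.asarray(criteria, dtype=float)
        delta = c - c.min()            # differences relative to the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()             # normalize to sum to one

    print(model_weights([210.3, 212.1, 215.8]))  # e.g. three alternative models
    ```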

  2. Regression and multivariate models for predicting particulate matter concentration level.

    PubMed

    Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S

    2018-01-01

    The devastating health effects of particulate matter (PM10) exposure on susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas that have multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses were carried out using daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010. The data came from an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had limited influence on PM10 concentration levels, with coefficient of determination (R²) values from 23 to 29% for the seasoned and unseasoned analyses. The prediction analysis showed that the PCR models had better R² results than the MLR models. For both seasoned and unseasoned data, the MLR models had R² values from 0.50 to 0.60, while the PCR models had R² values from 0.66 to 0.89. In addition, a validation analysis using 2016 data confirmed that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis giving the best result. These analyses will aid in achieving sustainable air quality management strategies.
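
    A minimal sketch of principal component regression as used above (synthetic data; the real predictors are the meteorological series):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 365
    X = rng.normal(size=(n, 4))  # temperature, humidity, wind speed, wind direction
    pm10 = 50 + X @ np.array([5.0, -3.0, -4.0, 1.0]) + rng.normal(scale=5.0, size=n)

    # Regress daily average PM10 on the leading principal components
    pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
    pcr.fit(X, pm10)
    print("R^2:", pcr.score(X, pm10))
    ```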

  3. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The ARIMA approach is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancers of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: estimation based on the standard normal distribution, estimation based on White's theory, and continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yields much smaller confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
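
    An illustrative ARIMA fit of this kind with statsmodels (synthetic series, hypothetical model order):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    rate = 100 + np.cumsum(rng.normal(0, 0.5, size=120))  # synthetic mortality rates

    fit = ARIMA(rate, order=(1, 1, 1)).fit()
    print(fit.params)
    print(fit.conf_int())  # confidence intervals for the AR/MA parameters
    ```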

  4. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    PubMed

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in a deep understanding of the pathogenesis of HIV infection, from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only the parameter estimates of interest from clinical or experimental data. When such prior information is not available, however, it is desirable to determine all the parameter estimates from data. In this paper, we combine two newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate, which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decisions based on viral dynamic models are possible.

  5. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is then presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimensionality grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match desired performance goals by adjusting parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
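
    Latin hypercube sampling of a parameter space, as used for the parallel filter models, can be sketched with SciPy's quasi-Monte Carlo module (bounds hypothetical):

    ```python
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=3, seed=0)   # three uncertain parameters
    unit = sampler.random(n=8)                  # 8 samples regardless of dimension
    lo = np.array([0.1, 1.0, -2.0])
    hi = np.array([0.9, 5.0, 2.0])
    samples = qmc.scale(unit, lo, hi)           # map to physical parameter ranges
    print(samples)
    ```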

  6. Combination of Alternative Models by Mutual Data Assimilation: Supermodeling With A Suite of Primitive Equation Models

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Selten, F.

    2016-12-01

    Different models of climate and weather commonly give projections and predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction at run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. The strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained when a supermodel is used to predict the formation of coherent structures or their frequency. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only in highly nonlinear toy systems, but also with the weaker nonlinearities that occur in climate models.

  7. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. The model parameters required for simulation were determined using Pn values measured under PL for several combinations of the three explanatory variables (average PPFD, frequency, and duty ratio). The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate Pn under various light environments where PPFD changes cyclically.
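
    A toy Euler integration of the pooled-intermediate mechanism (assumed simple linear rates and hypothetical constants, not the authors' fitted model):

    ```python
    import numpy as np

    freq, duty, ppfd_avg = 2.0, 0.5, 200.0   # Hz, duty ratio, umol m^-2 s^-1
    k_in, k_out, pool_max = 1.0, 2.0, 100.0  # hypothetical rate constants / capacity
    dt, t_end = 1e-3, 10.0
    pool, assimilated = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        light_on = (t * freq) % 1.0 < duty
        ppfd = ppfd_avg / duty if light_on else 0.0  # same time-averaged PPFD
        fill = k_in * ppfd * (1.0 - pool / pool_max) # PI accumulation in light
        drain = k_out * pool                         # PI consumption (yields Pn)
        pool += (fill - drain) * dt
        assimilated += drain * dt
    print("time-averaged Pn proxy:", assimilated / t_end)
    ```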

  8. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often cloud model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations but leads to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
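
    A quadratic metamodel of this kind can be sketched as a degree-2 response surface fitted to a handful of (parameter set, skill) pairs and then searched cheaply (synthetic data, hypothetical skill function):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    theta = rng.uniform(-1, 1, size=(30, 5))   # 30 model runs, 5 parameters
    skill = 1.0 - np.sum((theta - 0.2) ** 2, axis=1) + 0.01 * rng.normal(size=30)

    meta = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    meta.fit(theta, skill)                     # quadratic surrogate of the model
    trial = rng.uniform(-1, 1, size=(100_000, 5))
    best = trial[np.argmax(meta.predict(trial))]
    print("surrogate-optimal parameters:", best)
    ```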

  9. Accuracy of three-dimensional dental resin models created by fused deposition modeling, stereolithography, and Polyjet prototype technologies: A comparative study.

    PubMed

    Rebong, Raymund E; Stewart, Kelton T; Utreja, Achint; Ghoneima, Ahmed A

    2018-05-01

    The aim of this study was to assess the dimensional accuracy of fused deposition modeling (FDM)-, Polyjet-, and stereolithography (SLA)-produced models by comparing them to traditional plaster casts. A total of 12 maxillary and mandibular posttreatment orthodontic plaster casts were selected from the archives of the Orthodontic Department at the Indiana University School of Dentistry. Plaster models were scanned, saved as stereolithography files, and printed as physical models using three different three-dimensional (3D) printers: Makerbot Replicator (FDM), 3D Systems SLA 6000 (SLA), and Objet Eden500V (Polyjet). A digital caliper was used to obtain measurements on the original plaster models as well as on the printed resin models. Comparison between the 3D printed models and the plaster casts showed no statistically significant differences in most of the parameters. However, FDM was significantly higher on average than were plaster casts in maxillary left mixed plane (MxL-MP) and mandibular intermolar width (Md-IMW). Polyjet was significantly higher on average than were plaster casts in maxillary intercanine width (Mx-ICW), mandibular intercanine width (Md-ICW), and mandibular left mixed plane (MdL-MP). Polyjet was significantly lower on average than were plaster casts in maxillary right vertical plane (MxR-vertical), maxillary left vertical plane (MxL-vertical), mandibular right anteroposterior plane (MdR-AP), mandibular right vertical plane (MdR-vertical), and mandibular left vertical plane (MdL-vertical). SLA was significantly higher on average than were plaster casts in MxL-MP, Md-ICW, and overbite. SLA was significantly lower on average than were plaster casts in MdR-vertical and MdL-vertical. Dental models reconstructed by FDM technology had the fewest dimensional measurement differences compared to plaster models.

  10. Optical identification of subjects at high risk for developing breast cancer

    NASA Astrophysics Data System (ADS)

    Taroni, Paola; Quarto, Giovanna; Pifferi, Antonio; Ieva, Francesca; Paganoni, Anna Maria; Abbate, Francesca; Balestreri, Nicola; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2013-06-01

    Time-domain multiwavelength (635 to 1060 nm) optical mammography was performed on 147 subjects with recent x-ray mammograms available, and average breast tissue composition (water, lipid, collagen, oxy- and deoxyhemoglobin) and scattering parameters (amplitude a and slope b) were estimated. Correlation was observed between the optically derived parameters and mammographic density [Breast Imaging Reporting and Data System (BI-RADS) categories], which is a strong risk factor for breast cancer. A logistic regression model was obtained to best identify high-risk (BI-RADS 4) subjects, based on collagen content and scattering parameters. The model presents a total misclassification error of 12.3%, sensitivity of 69%, specificity of 94%, and simple kappa of 0.84, which compares favorably even with intra-radiologist assignments of BI-RADS categories.

  11. Verification of MCNP simulation of neutron flux parameters at TRIGA MK II reactor of Malaysia.

    PubMed

    Yavar, A R; Khalafi, H; Kasesaz, Y; Sarmani, S; Yahaya, R; Wood, A K; Khoo, K S

    2012-10-01

    A 3-D model of the 1 MW TRIGA Mark II research reactor was simulated. Neutron flux parameters were calculated using the MCNP-4C code and were compared with experimental results obtained by k0-INAA and the absolute method. The average values of φth, φepi, and φfast from the MCNP code were (2.19±0.03)×10¹² cm⁻² s⁻¹, (1.26±0.02)×10¹¹ cm⁻² s⁻¹ and (3.33±0.02)×10¹⁰ cm⁻² s⁻¹, respectively. These average values were consistent with the experimental results obtained by k0-INAA. The findings show good agreement between the MCNP code results and the experimental results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Calibration to improve forward model simulation of microwave emissivity at GPM frequencies over the U.S. Southern Great Plains

    PubMed Central

    Harrison, Kenneth W.; Tian, Yudong; Peters-Lidard, Christa D.; Ringerud, Sarah; Kumar, Sujay V.

    2018-01-01

    Better estimation of land surface microwave emissivity promises to improve over-land precipitation retrievals in the GPM era. Forward models of land microwave emissivity are available but have suffered from poor parameter specification and limited testing. Here, forward models are calibrated and the accompanying change in predictive power is evaluated. With inputs (e.g., soil moisture) from the Noah land surface model and applying MODIS LAI data, two microwave emissivity models are tested, the Community Radiative Transfer Model (CRTM) and Community Microwave Emission Model (CMEM). The calibration is conducted with the NASA Land Information System (LIS) parameter estimation subsystem using AMSR-E based emissivity retrievals for the calibration dataset. The extent of agreement between the modeled and retrieved estimates is evaluated using the AMSR-E retrievals for a separate 7-year validation period. Results indicate that calibration can significantly improve the agreement, simulating emissivity with an across-channel average root-mean-square-difference (RMSD) of about 0.013, or about 20% lower than if relying on daily estimates based on climatology. The results also indicate that calibration of the microwave emissivity model alone, as was done in prior studies, results in as much as 12% higher across-channel average RMSD, as compared to joint calibration of the land surface and microwave emissivity models. It remains as future work to assess the extent to which the improvements in emissivity estimation translate into improvements in precipitation retrieval accuracy. PMID:29795962

  13. Modelling and analysis of creep deformation and fracture in a 1 Cr 1/2 Mo ferritic steel

    NASA Astrophysics Data System (ADS)

    Dyson, B. F.; Osgerby, D.

    A quantitative model, based upon a proposed new mechanism of creep deformation in particle-hardened alloys, has been validated by analysis of creep data from a 13CrMo4-4 (1Cr 1/2 Mo) material tested under a range of stresses and temperatures. The methodology used to extract the model parameters quantifies, as a first approximation, only the main degradation (damage) processes - in the case of the 1Cr 1/2 Mo steel, these are considered to be the parallel operation of particle coarsening and a progressively increasing stress due to the constant-load boundary condition. These 'global' model parameters can then be modified (only slightly) as required to obtain a detailed description and 'fit' to the rupture lifetime and strain/time trajectory of any individual test. The global model parameter approach may be thought of as predicting average behavior, and the detailed fits as taking account of uncertainties (scatter) due to variability in the material. Using the global parameter dataset, predictions have also been made of behavior under biaxial stressing, constant strain rate, and constant total strain (stress relaxation), as well as of the likely success or otherwise of metallographic and mechanical remanent lifetime procedures.

  14. Cosmological model-independent test of ΛCDM with two-point diagnostic by the observational Hubble parameter data

    NASA Astrophysics Data System (ADS)

    Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie

    2018-04-01

    Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) points in the redshift range 0 < z ≤ 2.36 to make a cosmological model-independent test of the ΛCDM model with the two-point Omh²(z₂; z₁) diagnostic. In the ΛCDM model, with equation of state (EoS) w = -1, the two-point diagnostic relation Omh² ≡ Ωm h² holds, where Ωm is the present matter density parameter and h is the Hubble constant divided by 100 km s⁻¹ Mpc⁻¹. We utilize two methods, the weighted mean and median statistics, to bin the OHD and increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and robust. Applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh² fluctuate as the redshift intervals change, on average they are consistent with a constant within the 1σ confidence interval. We therefore conclude that the ΛCDM model cannot be ruled out.
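
    For reference, the two-point diagnostic is conventionally defined from the dimensionless Hubble rate h(z) = H(z)/(100 km s⁻¹ Mpc⁻¹) as follows (standard definition, not quoted from the paper):

    ```latex
    % Two-point Omh^2 diagnostic; in flat $\Lambda$CDM with $w=-1$ it equals
    % the constant $\Omega_m h^2$ for every redshift pair $(z_1, z_2)$.
    Omh^{2}(z_{2};z_{1}) = \frac{h^{2}(z_{2}) - h^{2}(z_{1})}{(1+z_{2})^{3} - (1+z_{1})^{3}}
    ```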

  15. Variable frame rate transmission - A review of methodology and application to narrow-band LPC speech coding

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.

    1982-04-01

    The variable frame rate (VFR) transmission methodology developed, implemented, and tested in the years 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters extracted from the input speech at a fixed frame rate is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and those of the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
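
    The first approach reduces to a simple threshold rule; a minimal sketch (the threshold and distance measure here are hypothetical):

    ```python
    import numpy as np

    def vfr_select(frames, threshold=0.1):
        """Indices of frames to transmit; frames is an (n, p) parameter array."""
        sent = [0]                                   # always send the first frame
        for i in range(1, len(frames)):
            change = np.max(np.abs(frames[i] - frames[sent[-1]]))
            if change > threshold:                   # changed enough -> transmit
                sent.append(i)
        return sent

    rng = np.random.default_rng(0)
    frames = np.cumsum(rng.normal(0, 0.05, size=(100, 10)), axis=0)
    idx = vfr_select(frames)
    print(f"transmitted {len(idx)} of {len(frames)} frames")
    ```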

  16. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    The sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation angle, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed the highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that using the weakened model significantly increased the sensitivity of the calculated strength at the hand, with previously insensitive parameters becoming highly sensitive. This increased sensitivity presents a significant challenge in applications that use musculoskeletal models to represent impaired individuals.

  17. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under the current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
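
    In symbols, the decomposition described above can be written as follows (notation assumed for illustration, not taken verbatim from the paper):

    ```latex
    % Squared bias is estimable from hindcasts; the model variance term is
    % estimable from a simulation experiment over structures, inputs, parameters.
    \mathrm{MSEP}_{\mathrm{uncertain}}(X)
      = \underbrace{\mathrm{bias}^{2}}_{\text{hindcasts}}
      + \underbrace{\mathrm{Var}\big[\hat{f}(X)\big]}_{\text{simulation experiment}}
    ```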

  18. String model for the dynamics of glass-forming liquids

    PubMed Central

    Pazmiño Betancourt, Beatriz A.; Douglas, Jack F.; Starr, Francis W.

    2014-01-01

    We test the applicability of a living polymerization theory to describe cooperative string-like particle rearrangement clusters (strings) observed in simulations of a coarse-grained polymer melt. The theory quantitatively describes the interrelation between the average string length L, configurational entropy Sconf, and the order parameter for string assembly Φ without free parameters. Combining this theory with the Adam-Gibbs model allows us to predict the relaxation time τ in a lower temperature T range than accessible by current simulations. In particular, the combined theories suggest a return to Arrhenius behavior near Tg and a low T residual entropy, thus avoiding a Kauzmann “entropy crisis.” PMID:24880303

  20. Concerning the relationship between evapotranspiration and soil moisture

    NASA Technical Reports Server (NTRS)

    Wetzel, Peter J.; Chang, Jy-Tai

    1987-01-01

    The relationship between evapotranspiration and soil moisture during the drying, supply-limited phase is studied. A second scaling parameter, based on the evapotranspirational supply-and-demand concept of Federer (1982), is defined; this parameter, referred to as the threshold evapotranspiration, marks the point on vegetation-covered surfaces just before leaf stomata close, and on bare soil the point at which surface tension begins to restrict moisture release from soil pores. A simple model for evapotranspiration is proposed. The effects of natural soil heterogeneities on evapotranspiration computed from the model are investigated. It is observed that the natural variability in soil moisture caused by these heterogeneities alters the relationship between regional evapotranspiration and the area-average soil moisture.

  1. An analytical approach to obtaining JWL parameters from cylinder tests

    NASA Astrophysics Data System (ADS)

    Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.

    2017-01-01

    An analytical method for determining parameters for the JWL equation of state from cylinder test data is described. The method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydrocode modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies obtained by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
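
    For context, the JWL equation of state referred to here is conventionally written in the following standard form, where A, B, R1, R2 and ω are the fitted parameters, V is the relative volume and E is the detonation energy per unit volume:

    ```latex
    % Standard JWL pressure -- relative-volume form
    p(V) = A\left(1 - \frac{\omega}{R_{1}V}\right)e^{-R_{1}V}
         + B\left(1 - \frac{\omega}{R_{2}V}\right)e^{-R_{2}V}
         + \frac{\omega E}{V}
    ```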

  2. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps advance knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014), Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
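
    The within/between-model variance segregation mentioned above follows the usual law-of-total-variance split; in generic notation (ours, for illustration), with prediction Δ and model proposition M:

    ```latex
    % Total predictive variance = expected within-model variance
    %                           + between-model variance of the conditional means
    \mathrm{Var}(\Delta)
      = \underbrace{\mathrm{E}_{M}\!\left[\mathrm{Var}(\Delta \mid M)\right]}_{\text{within-model}}
      + \underbrace{\mathrm{Var}_{M}\!\left[\mathrm{E}(\Delta \mid M)\right]}_{\text{between-model}}
    ```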

  3. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known to have experimental geometries that are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models that fit the observations. Statistical analysis of these models allows an average (reference) solution and its standard deviation to be derived, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant-slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.

  4. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has made it possible to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model, consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination by conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrate markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810

  5. Model behavior and sensitivity in an application of the cohesive bed component of the community sediment transport modeling system for the York River estuary, VA, USA

    USGS Publications Warehouse

    Fall, Kelsey A.; Harris, Courtney K.; Friedrichs, Carl T.; Rinehimer, J. Paul; Sherwood, Christopher R.

    2014-01-01

    The Community Sediment Transport Modeling System (CSTMS) cohesive bed sub-model that accounts for erosion, deposition, consolidation, and swelling was implemented in a three-dimensional domain to represent the York River estuary, Virginia. The objectives of this paper are to (1) describe the application of the three-dimensional hydrodynamic York Cohesive Bed Model, (2) compare calculations to observations, and (3) investigate sensitivities of the cohesive bed sub-model to user-defined parameters. Model results for summer 2007 showed good agreement with tidal-phase averaged estimates of sediment concentration, bed stress, and current velocity derived from Acoustic Doppler Velocimeter (ADV) field measurements. An important step in implementing the cohesive bed model was specification of both the initial and equilibrium critical shear stress profiles, in addition to choosing other parameters like the consolidation and swelling timescales. This model promises to be a useful tool for investigating the fundamental controls on bed erodibility and settling velocity in the York River, a classical muddy estuary, provided that appropriate data exists to inform the choice of model parameters.

  6. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking.

    PubMed

    Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy

    2015-07-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup throughout a four-month measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge samples using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and the phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling by altering the hindered settling velocity and yield stress parameters. Strikingly, this is not the case for Chloroflexi, which occurs at more than double the abundance of M. parvicella and forms filaments primarily protruding from the flocs. The transient and compression settling parameters show comparably high variability and no significant association with filamentous abundance. A two-dimensional, axi-symmetrical computational fluid dynamics (CFD) model was used to assess calibration scenarios for modelling filamentous bulking. Our results suggest that model predictions can benefit significantly from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocities in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, case-specific calibration of the transient and compression settling parameters as well as the yield stress is not necessary, and an average parameter set - obtained under bulking and good settling conditions - can be used. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient-reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored, and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV% ranging from ~20 to 60%, parameter estimation inaccuracy derived from error in reported dosing times was generally held to around 10% on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
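
    The setting can be illustrated with a toy one-compartment, monoexponential simulation in which reported dosing times deviate from the true ones (all values hypothetical):

    ```python
    import numpy as np

    ke, V, dose = 0.05, 50.0, 100.0          # 1/h, L, mg; half-life ~ 13.9 h
    true_times = np.arange(0.0, 96.0, 8.0)   # dosing interval < half-life
    rng = np.random.default_rng(0)
    reported = true_times + rng.normal(0.0, 1.0, size=true_times.size)  # hours

    def conc(t, dose_times):
        """Superposition of monoexponential decays from each prior dose."""
        dt = t[:, None] - dose_times[None, :]
        return (dose / V) * np.where(dt > 0, np.exp(-ke * dt), 0.0).sum(axis=1)

    t_obs = np.array([95.0, 96.5])           # sparse sampling design
    print("true:", conc(t_obs, true_times), "reported:", conc(t_obs, reported))
    ```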

  8. Parameterization and prediction of nanoparticle transport in porous media: A reanalysis using artificial neural network

    NASA Astrophysics Data System (ADS)

    Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon

    2017-06-01

    The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The partial derivatives (PaD) technique and the Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters; for example, the trend of K_att with average pore water velocity was revealed to be positive. The resulting correlations, despite being developed with a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as the critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated against subsets of the experimental data and compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strength.

  9. Continuous piecewise-linear, reduced-order electrochemical model for lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid

    2017-02-01

    Model-order reduction and minimization of the CPU run-time while maintaining model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function describing the dependence of the electrode's open-circuit potential on the state of charge to continuous piecewise-linear regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear region. Applying the proposed technique cuts the CPU run-time by around 20% compared to the reduced-order, electrode-average model. Finally, model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately, with less than 2% error.
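
    A continuous piecewise-linear open-circuit-potential curve of this kind reduces to linear interpolation between knots; a minimal sketch with hypothetical knot values (the paper places the knots optimally):

    ```python
    import numpy as np

    soc_knots = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])  # knot locations (SOC)
    ocv_knots = np.array([3.0, 3.3, 3.5, 3.7, 4.0, 4.2])  # volts at each knot

    def ocv(soc):
        """Continuous piecewise-linear OCV as a function of state of charge."""
        return np.interp(soc, soc_knots, ocv_knots)

    print(ocv(np.array([0.05, 0.45, 0.95])))
    ```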

  10. A point-infiltration model for estimating runoff from rainfall on small basins in semiarid areas of Wyoming

    USGS Publications Warehouse

    Rankl, James G.

    1990-01-01

    A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. The information required to compute runoff for the design storm using the model includes (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve, using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in statistical analyses of hydrologic data, such as annual peak frequency distributions and sediment yield. A comparison was made between the sum of the simulated runoff and the sum of the measured runoff for all available records of runoff-producing storms in the 10 study basins. The sums of the simulated runoff ranged from 12.0 percent less than to 23.4 percent more than the sums of the measured runoff. A measure of the standard error of estimate was computed for each data set; these values ranged from 20 to 70 percent of the mean value of the measured runoff. Rainfall-simulator infiltrometer tests were made in two small basins. The water uptake measured by the test in the Dugout Creek tributary basin averaged about three times the water uptake computed from rainfall and runoff data. Therefore, infiltrometer data were not used to determine infiltration rates for this study.

  11. Nonlinear consolidation in randomly heterogeneous highly compressible aquitards

    NASA Astrophysics Data System (ADS)

    Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.

    2018-05-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity ( K), compression index ( C c), void ratio ( e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.

  12. Development of a calibration protocol and identification of the most sensitive parameters for the particulate biofilm models used in biological wastewater treatment.

    PubMed

    Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse

    2012-05-01

    Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol of the particulate biofilm model with a help of the sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin® and verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR) with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedures enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    PubMed

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to re-construct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through the Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude of the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.

  14. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K(OC) or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d(-1) for the fast release (average approximately 5 d(-1)) and 0.0001 to 0.1 d(-1) for the slow release (average approximately 0.03 d(-1)). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.

  15. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102

  16. Diffraction peak profiles of surface relaxed spherical nanocrystals

    NASA Astrophysics Data System (ADS)

    Perez-Demydenko, C.; Scardi, P.

    2017-09-01

    A model is proposed for surface relaxation of spherical nanocrystals. Besides reproducing the primary effect of changing the average unit cell parameter, the model accounts for the inhomogeneous atomic displacement caused by surface relaxation and its effect on the diffraction line profiles. Based on three parameters with clear physical meanings - extension of the sub-coordination effect, maximum radial displacement due to sub-coordination, and effective hydrostatic pressure - the model also considers elastic anisotropy and provides parametric expressions of the diffraction line profiles directly applicable in data analysis. The model was tested on spherical nanocrystals of several fcc metals, matching atomic positions with those provided by Molecular Dynamics (MD) simulations based on embedded atom potentials. Agreement was also verified between powder diffraction patterns generated by the Debye scattering equation, using atomic positions from MD and the proposed model.

  17. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revized algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unreliastic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.

  18. Study of market model describing the contrary behaviors of informed and uninformed agents: Being minority and being majority

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Xia; Liao, Hao; Medo, Matus; Shang, Ming-Sheng; Yeung, Chi Ho

    2016-05-01

    In this paper we analyze the contrary behaviors of the informed investors and uniformed investors, and then construct a competition model with two groups of agents, namely agents who intend to stay in minority and those who intend to stay in majority. We find two kinds of competitions, inter- and intra-groups. The model shows periodic fluctuation feature. The average distribution of strategies illustrates a prominent central peak which is relevant to the peak-fat-tail character of price change distribution in stock markets. Furthermore, in the modified model the tolerance time parameter makes the agents diversified. Finally, we compare the strategies distribution with the price change distribution in real stock market, and we conclude that contrary behavior rules and tolerance time parameter are indeed valid in the description of market model.

  19. Greenhouse effect in the atmosphere

    NASA Astrophysics Data System (ADS)

    Smirnov, B. M.

    2016-04-01

    Average optical atmospheric parameters for the infrared spectrum range are evaluated on the basis of the Earth energetic balance and parameters of the standard atmosphere. The average optical thickness of the atmosphere is u ≈ 2.5 and this atmospheric emission is originated at altitudes below 10 km. Variations of atmospheric radiative fluxes towards the Earth and outward are calculated as a function of the concentration of \\text{CO}2 molecules for the regular model of molecular spectrum. As a result of doubling of the \\text{CO}2 concentration the change of the global Earth temperature is (0.4 +/- 0.2) \\text{K} if other atmospheric parameters are conserved compared to the value (3.0 +/- 1.5) \\text{K} under real atmospheric conditions with the variation of the amount of atmospheric water. An observed variation of the global Earth temperature during the last century (0.8 ^\\circ \\text{C}) follows from an increase of the mass of atmospheric water by 7% or by conversion of 1% of atmospheric water in aerosols.

  20. Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.

    PubMed

    Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F

    2013-10-01

    A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. Protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).

  1. The impact of lateral variations in lithospheric thickness on glacial isostatic adjustment in West Antarctica

    NASA Astrophysics Data System (ADS)

    Nield, Grace A.; Whitehouse, Pippa L.; van der Wal, Wouter; Blank, Bas; O'Donnell, John Paul; Stuart, Graham W.

    2018-04-01

    Differences in predictions of Glacial Isostatic Adjustment (GIA) for Antarctica persist due to uncertainties in deglacial history and Earth rheology. The Earth models adopted in many GIA studies are defined by parameters that vary in the radial direction only and represent a global average Earth structure (referred to as 1D Earth models). Over-simplifying actual Earth structure leads to bias in model predictions in regions where Earth parameters differ significantly from the global average, such as West Antarctica. We investigate the impact of lateral variations in lithospheric thickness on GIA in Antarctica by carrying out two experiments that use different rheological approaches to define 3D Earth models that include spatial variations in lithospheric thickness. The first experiment defines an elastic lithosphere with spatial variations in thickness inferred from seismic studies. We compare the results from this 3D model with results derived from a 1D Earth model that has a uniform lithospheric thickness defined as the average of the 3D lithospheric thickness. Irrespective of deglacial history and sub-lithospheric mantle viscosity, we find higher gradients of present-day uplift rates (i.e. higher amplitude and shorter wavelength) in West Antarctica when using the 3D models, due to the thinner-than-1D-average lithosphere prevalent in this region. The second experiment uses seismically-inferred temperature as input to a power-law rheology thereby allowing the lithosphere to have a viscosity structure. Modelling the lithosphere with a power-law rheology results in behaviour that is equivalent to a thinner-lithosphere model, and it leads to higher amplitude and shorter wavelength deformation compared with the first experiment. We conclude that neglecting spatial variations in lithospheric thickness in GIA models will result in predictions of peak uplift and subsidence that are biased low in West Antarctica. This has important implications for ice-sheet modelling studies as the steeper gradients of uplift predicted from the more realistic 3D model may promote stability in marine-grounded regions of West Antarctica. Including lateral variations in lithospheric thickness, at least to the level of considering West and East Antarctica separately, is important for capturing short wavelength deformation and it has the potential to provide a better fit to GPS observations as well as an improved GIA correction for GRACE data.

  2. Chemical short-range order and lattice deformations in MgyTi1-yHx thin films probed by hydrogenography

    NASA Astrophysics Data System (ADS)

    Gremaud, R.; Baldi, A.; Gonzalez-Silveira, M.; Dam, B.; Griessen, R.

    2008-04-01

    A multisite lattice gas approach is used to model pressure-optical-transmission isotherms (PTIs) recorded by hydrogenography on MgyTi1-yHx sputtered thin films. The model reproduces the measured PTIs well and allows us to determine the chemical short-range order parameter s . The s values are in good agreement with those determined from extended x-ray absorption fine structure measurements. Additionally, the PTI multisite modeling yields a parameter L that accounts for the local lattice deformations with respect to the average MgyTi1-y lattice given by Vegard’s law. It is thus possible to extract two essential characteristics of a metastable alloy from hydrogenographic data.

  3. Parameter estimation of an ARMA model for river flow forecasting using goal programming

    NASA Astrophysics Data System (ADS)

    Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene

    2006-11-01

    SummaryRiver flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose and one of the most famous techniques is the Auto regressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River with 68 years of observed stream flow data was selected to evaluate the performance of the proposed method. The results when compared with the usual method of maximum likelihood estimation were favorable with respect to the new proposed algorithm.

  4. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  5. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdin, Kristine L.

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from themore » EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.« less

  6. Urban stream syndrome in a small, lightly developed watershed: a statistical analysis of water chemistry parameters, land use patterns, and natural sources.

    PubMed

    Halstead, Judith A; Kliman, Sabrina; Berheide, Catherine White; Chaucer, Alexander; Cock-Esteb, Alicea

    2014-06-01

    The relationships among land use patterns, geology, soil, and major solute concentrations in stream water for eight tributaries of the Kayaderosseras Creek watershed in Saratoga County, NY, were investigated using Pearson correlation coefficients and multivariate regression analysis. Sub-watersheds corresponding to each sampling site were delineated, and land use patterns were determined for each of the eight sub-watersheds using GIS. Four land use categories (urban development, agriculture, forests, and wetlands) constituted more than 99 % of the land in the sub-watersheds. Eleven water chemistry parameters were highly and positively correlated with each other and urban development. Multivariate regression models indicated urban development was the most powerful predictor for the same eleven parameters (conductivity, TN, TP, NO[Formula: see text], Cl(-), HCO(-)3, SO9(2-)4, Na(+), K(+), Ca(2+), and Mg(2+)). Adjusted R(2) values, ranging from 19 to 91 %, indicated that these models explained an average of 64 % of the variance in these 11 parameters across the samples and 70 % when Mg(2+) was omitted. The more common R (2), ranging from 29 to 92 %, averaged 68 % for these 11 parameters and 72 % when Mg(2+) was omitted. Water quality improved most with forest coverage in stream watersheds. The strong associations between water quality variables and urban development indicated an urban source for these 11 water quality parameters at all eight sampling sites was likely, suggesting that urban stream syndrome can be detected even on a relatively small scale in a lightly developed area. Possible urban sources of Ca(2+) and HCO(-)3 are suggested.

  7. Granger causality for state-space models

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Seth, Anil K.

    2015-04-01

    Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.

  8. The maximum depth of shower with E sub 0 larger than 10(17) eV on average characteristics of EAS different components

    NASA Technical Reports Server (NTRS)

    Glushkov, A. V.; Efimov, N. N.; Makarov, I. T.; Pravdin, M. I.; Dedenko, L. G.

    1985-01-01

    The extensive air shower (EAS) development model independent method of the determination of a maximum depth of shower (X sub m) is considered. X sub m values obtained on various EAS parameters are in a good agreement.

  9. ELECTRICAL AEROSOL DETECTOR (EAD) MEASUREMENTS AT THE ST. LOUIS SUPERSITE

    EPA Science Inventory

    The Model 3070A Electrical Aerosol Detector (EAD) measures a unique aerosol parameter called total aerosol length. Reported as mm/cm3, aerosol length can be thought of as a number concentration times average diameter, or simply as d1 weighting. This measurement falls between nu...

  10. New York Bight Study. Report 1. Hydrodynamic Modeling

    DTIC Science & Technology

    1994-08-01

    function of time. Values of these parameters, averaged daily, were computed from meteorological data recorded at the John F. Kennedy ( JFK ) Airport for...Island Sound "exchange coefficient values were obtained as before from meteorological data collected at the JFK Airport . They are shown in Figures 62-63

  11. Average pollutant concentration in soil profile simulated with Convective-Dispersive Equation. Model and Manual

    USDA-ARS?s Scientific Manuscript database

    Different parts of soil solution move with different velocities, and therefore chemicals are leached gradually from soil with infiltrating water. Solute dispersivity is the soil parameter characterizing this phenomenon. To characterize the dispersivity of soil profile at field scale, it is desirable...

  12. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  13. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    PubMed Central

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the Neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  14. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Simple-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational bozed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.

  15. Photohadronic scenario in interpreting the February-March 2014 flare of 1ES 1011+496

    NASA Astrophysics Data System (ADS)

    Sahu, Sarira; de León, Alberto Rosales; Miranda, Luis Salvador

    2017-11-01

    The extraordinary multi-TeV flare from 1ES 1011+496 during February-March 2014 was observed by the MAGIC telescopes for 17 nights and the average spectrum of the whole period has a non-trivial shape. We have used the photohadronic model and a template extragalactic background light model to explain the average spectrum which fits the flare data well. The spectral index α is the only free parameter in our model. We have also shown that the non-trivial nature of the spectrum is due to the change in the behavior of the optical depth above ˜ 600 GeV γ -ray energy accompanied with the high SSC flux. This corresponds to an almost flat intrinsic flux for the multi-TeV γ -rays. Our model prediction can constrain the SSC flux of the leptonic models in the quiescent state.

  16. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcomes these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure, respectively. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix that we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.

  17. Updated Bs-mixing constraints on new physics models for b →s ℓ+ℓ- anomalies

    NASA Astrophysics Data System (ADS)

    Di Luzio, Luca; Kirk, Matthew; Lenz, Alexander

    2018-05-01

    Many new physics models that explain the intriguing anomalies in the b -quark flavor sector are severely constrained by Bs mixing, for which the Standard Model prediction and experiment agreed well until recently. The most recent Flavour Lattice Averaging Group (FLAG) average of lattice results for the nonperturbative matrix elements points, however, in the direction of a small discrepancy in this observable Cabibbo-Kobayashi-Maskawa (CKM). Using up-to-date inputs from standard sources such as PDG, FLAG and one of the two leading CKM fitting groups to determine Δ MsSM, we find a severe reduction of the allowed parameter space of Z' and leptoquark models explaining the B anomalies. Remarkably, in the former case the upper bound on the Z' mass approaches dangerously close to the energy scales already probed by the LHC. We finally identify some model-building directions in order to alleviate the tension with Bs mixing.

  18. Analysis of the influence of handset phone position on RF exposure of brain tissue.

    PubMed

    Ghanmi, Amal; Varsier, Nadège; Hadjem, Abdelhamid; Conil, Emmanuelle; Picon, Odile; Wiart, Joe

    2014-12-01

    Exposure to mobile phone radio frequency (RF) electromagnetic fields depends on many different parameters. For epidemiological studies investigating the risk of brain cancer linked to RF exposure from mobile phones, it is of great interest to characterize brain tissue exposure and to know which parameters this exposure is sensitive to. One such parameter is the position of the phone during communication. In this article, we analyze the influence of the phone position on the brain exposure by comparing the specific absorption rate (SAR) induced in the head by two different mobile phone models operating in Global System for Mobile Communications (GSM) frequency bands. To achieve this objective, 80 different phone positions were chosen using an experiment based on the Latin hypercube sampling (LHS) to select a representative set of positions. The averaged SAR over 10 g (SAR10 g) in the head, the averaged SAR over 1 g (SAR1 g ) in the brain, and the averaged SAR in different anatomical brain structures were estimated at 900 and 1800 MHz for the 80 positions. The results illustrate that SAR distributions inside the brain area are sensitive to the position of the mobile phone relative to the head. The results also show that for 5-10% of the studied positions the SAR10 g in the head and the SAR1 g in the brain can be 20% higher than the SAR estimated for the standard cheek position and that the Specific Anthropomorphic Mannequin (SAM) model is conservative for 95% of all the studied positions. © 2014 Wiley Periodicals, Inc.

  19. Modelling and tuning for a time-delayed vibration absorber with friction

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxu; Xu, Jian; Ji, Jinchen

    2018-06-01

    This paper presents an integrated analytical and experimental study to the modelling and tuning of a time-delayed vibration absorber (TDVA) with friction. In system modelling, this paper firstly applies the method of averaging to obtain the frequency response function (FRF), and then uses the derived FRF to evaluate the fitness of different friction models. After the determination of the system model, this paper employs the obtained FRF to evaluate the vibration absorption performance with respect to tunable parameters. A significant feature of the TDVA with friction is that its stability is dependent on the excitation parameters. To ensure the stability of the time-delayed control, this paper defines a sufficient condition for stability estimation. Experimental measurements show that the dynamic response of the TDVA with friction can be accurately predicted and the time-delayed control can be precisely achieved by using the modelling and tuning technique provided in this paper.

  20. Evaluation of spectral domain optical coherence tomography parameters in ocular hypertension, preperimetric, and early glaucoma.

    PubMed

    Aydogan, Tuğba; Akçay, BetÜl İlkay Sezgin; Kardeş, Esra; Ergin, Ahmet

    2017-11-01

    The objective of this study is to evaluate the diagnostic ability of retinal nerve fiber layer (RNFL), macular, optic nerve head (ONH) parameters in healthy subjects, ocular hypertension (OHT), preperimetric glaucoma (PPG), and early glaucoma (EG) patients, to reveal factors affecting the diagnostic ability of spectral domain-optical coherence tomography (SD-OCT) parameters and risk factors for glaucoma. Three hundred and twenty-six eyes (89 healthy, 77 OHT, 94 PPG, and 66 EG eyes) were analyzed. RNFL, macular, and ONH parameters were measured with SD-OCT. The area under the receiver operating characteristic curve (AUC) and sensitivity at 95% specificity was calculated. Logistic regression analysis was used to determine the glaucoma risk factors. Receiver operating characteristic regression analysis was used to evaluate the influence of covariates on the diagnostic ability of parameters. In PPG patients, parameters that had the largest AUC value were average RNFL thickness (0.83) and rim volume (0.83). In EG patients, parameter that had the largest AUC value was average RNFL thickness (0.98). The logistic regression analysis showed average RNFL thickness was a risk factor for both PPG and EG. Diagnostic ability of average RNFL and average ganglion cell complex thickness increased as disease severity increased. Signal strength index did not affect diagnostic abilities. Diagnostic ability of average RNFL and rim area increased as disc area increased. When evaluating patients with glaucoma, patients at risk for glaucoma, and healthy controls RNFL parameters deserve more attention in clinical practice. Further studies are needed to fully understand the influence of covariates on the diagnostic ability of OCT parameters.

  1. Squids in the Study of Cerebral Magnetic Field

    NASA Astrophysics Data System (ADS)

    Romani, G. L.; Narici, L.

    The following sections are included: * INTRODUCTION * HISTORICAL OVERVIEW * NEUROMAGNETIC FIELDS AND AMBIENT NOISE * DETECTORS * Room temperature sensors * SQUIDs * DETECTION COILS * Magnetometers * Gradiometers * Balancing * Planar gradiometers * Choice of the gradiometer parameters * MODELING * Current pattern due to neural excitations * Action potentials and postsynaptic currents * The current dipole model * Neural population and detected fields * Spherically bounded medium * SPATIAL CONFIGURATION OF THE SENSORS * SOURCE LOCALIZATION * Localization procedure * Experimental accuracy and reproducibility * SIGNAL PROCESSING * Analog Filtering * Bandpass filters * Line rejection filters * DATA ANALYSIS * Analysis of evoked/event-related responses * Simple average * Selected average * Recursive techniques * Similarity analysis * Analysis of spontaneous activity * Mapping and localization * EXAMPLES OF NEUROMAGNETIC STUDIES * Neuromagnetic measurements * Studies on the normal brain * Clinical applications * Epilepsy * Tinnitus * CONCLUSIONS * ACKNOWLEDGEMENTS * REFERENCES

  2. Spatial Interpretation of Tower, Chamber and Modelled Terrestrial Fluxes in a Tropical Forest Plantation

    NASA Astrophysics Data System (ADS)

    Whidden, E.; Roulet, N.

    2003-04-01

    Interpretation of a site average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurement may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site average flux are investigated in the presence of an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiologic controls. A diffusion model is used to compare tower, chamber, and model data, by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Flux estimation between methods improves with spatial interpretation, showing the importance to an estimation of a site average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.

  3. Comparison of region-of-interest-averaged and pixel-averaged analysis of DCE-MRI data based on simulations and pre-clinical experiments

    NASA Astrophysics Data System (ADS)

    He, Dianning; Zamora, Marta; Oto, Aytekin; Karczmar, Gregory S.; Fan, Xiaobing

    2017-09-01

    Differences between region-of-interest (ROI) and pixel-by-pixel analysis of dynamic contrast enhanced (DCE) MRI data were investigated in this study with computer simulations and pre-clinical experiments. ROIs were simulated with 10, 50, 100, 200, 400, and 800 different pixels. For each pixel, a contrast agent concentration as a function of time, C(t), was calculated using the Tofts DCE-MRI model with randomly generated physiological parameters (K trans and v e) and the Parker population arterial input function. The average C(t) for each ROI was calculated and then K trans and v e for the ROI was extracted. The simulations were run 100 times for each ROI with new K trans and v e generated. In addition, white Gaussian noise was added to C(t) with 3, 6, and 12 dB signal-to-noise ratios to each C(t). For pre-clinical experiments, Copenhagen rats (n  =  6) with implanted prostate tumors in the hind limb were used in this study. The DCE-MRI data were acquired with a temporal resolution of ~5 s in a 4.7 T animal scanner, before, during, and after a bolus injection (<5 s) of Gd-DTPA for a total imaging duration of ~10 min. K trans and v e were calculated in two ways: (i) by fitting C(t) for each pixel, and then averaging the pixel values over the entire ROI, and (ii) by averaging C(t) over the entire ROI, and then fitting averaged C(t) to extract K trans and v e. The simulation results showed that in heterogeneous ROIs, the pixel-by-pixel averaged K trans was ~25% to ~50% larger (p  <  0.01) than the ROI-averaged K trans. At higher noise levels, the pixel-averaged K trans was greater than the ‘true’ K trans, but the ROI-averaged K trans was lower than the ‘true’ K trans. The ROI-averaged K trans was closer to the true K trans than pixel-averaged K trans for high noise levels. In pre-clinical experiments, the pixel-by-pixel averaged K trans was ~15% larger than the ROI-averaged K trans. Overall, with the Tofts model, the extracted physiological parameters from the pixel-by-pixel averages were larger than the ROI averages. These differences were dependent on the heterogeneity of the ROI.

  4. Modeling Potential Climatic Treeline of Great Basin Bristlecone Pine in the Snake Mountain Range, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Bruening, J. M.; Tran, T. J.; Bunn, A. G.; Salzer, M. W.; Weiss, S. B.

    2015-12-01

    Great Basin bristlecone pine (Pinus longaeva) is a valuable paleoclimate resource due to the climatic sensitivity of its annually-resolved rings. Recent work has shown that low growing season temperatures limit tree growth at the upper treeline ecotone. The presence of precisely dated remnant wood above modern treeline shows that this ecotone shifts at centennial timescales; in some areas during the Holocene climatic optimum treeline was 100 m higher than at present. A recent model from Paulsen and Körner (2014, doi:10.1007/s00035-014-0124-0) predicts global potential treeline position as a function of climate. The model develops three parameters necessary to sustain a temperature-limited treeline; a growing season longer than 94 days, defined by all days with a mean temperature >0.9 °C, and a mean temperature of 6.4 °C across the entire growing season. While maintaining impressive global accuracy in treeline prediction, these parameters are not specific to the semi-arid Great Basin bristlecone pine treelines in Nevada. In this study, we used 49 temperature sensors arrayed across approximately one square kilometer of complex terrain at treeline on Mount Washington to model temperatures using topographic indices. Results show relatively accurate prediction throughout the growing season (e.g., July average daily temperatures were modeled with an R2 of 0.80 and an RMSE of 0.29 °C). The modeled temperatures enabled calibration of a regional treeline model, yielding different parameters needed to predict potential treeline than the global model. Preliminary results indicate that modern Bristlecone pine treeline on and around Mount Washington occurs in areas with a longer growing season length (~160 days defined by all days with a mean temperature >0.9 °C) and a warmer seasonal mean temperature (~9 °C) than the global average. This work will provide a baseline data set on treeline position in the Snake Range derived only from parameters physiologically relevant to demography, and may assist in understanding climate refugia for this species.

  5. Work-related accidents among the Iranian population: a time series analysis, 2000–2011

    PubMed Central

    Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Background Work-related accidents result in human suffering and economic losses and are considered as a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with 20·942 mean absolute percentage error (MAPE). Conclusions The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774

  6. Work-related accidents among the Iranian population: a time series analysis, 2000-2011.

    PubMed

    Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Work-related accidents result in human suffering and economic losses and are considered as a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with 20·942 mean absolute percentage error (MAPE). The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.

  7. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent) in the data subset used for model building. Overall, the logistic models ranged from 53 to 65 percent predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross validation sample.

  8. Geohydrology and simulation of ground-water flow in the aquifer system near Calvert City, Kentucky

    USGS Publications Warehouse

    Starn, J.J.; Arihood, L.D.; Rose, M.F.

    1995-01-01

    The U.S. Geological Survey, in cooperation with the Kentucky Natural Resources and Environmental Protection Cabinet, constructed a two-dimensional, steady-state ground-water-flow model to estimate hydraulic properties, contributing areas to discharge boundaries, and the average linear velocity at selected locations in an aquifer system near Calvert City, Ky. Nonlinear regression was used to estimate values of model parameters and the reliability of the parameter estimates. The regression minimizes the weighted difference between observed and calculated hydraulic heads and rates of flow. The calibrated model generally was better than alternative models considered, and although adding transmissive faults in the bedrock produced a slightly better model, fault transmissivity was not estimated reliably. The average transmissivity of the aquifer was 20,000 feet squared per day. Recharge to two outcrop areas, the McNairy Formation of Cretaceous age and the alluvium of Quaternary age, were 0.00269 feet per day (11.8 inches per year) and 0.000484 feet per day (2.1 inches per year), respectively. Contributing areas to wells at the Calvert City Water Company in 1992 did not include the Calvert City Industrial Complex. Since completing the fieldwork for this study in 1992, the Calvert City Water Company discontinued use of their wells and began withdrawing water from new wells that were located 4.5 miles east-southeast of the previous location; the contributing area moved farther from the industrial complex. The extent of the alluvium contributing water to wells was limited by the overlying lacustrine deposits. The average linear ground-water velocity at the industrial complex ranged from 0.90 feet per day to 4.47 feet per day with a mean of 1.98 feet per day.

  9. Stiffness of the endplate boundary layer and endplate surface topography are associated with brittleness of human whole vertebral bodies

    PubMed Central

    Nekkanty, Srikant; Yerramshetty, Janardhan; Kim, Do-Gyoon; Zauel, Roger; Johnson, Evan; Cody, Dianna D.; Yeni, Yener N.

    2013-01-01

    Stress magnitude and variability as estimated from large scale finite element (FE) analyses have been associated with compressive strength of human vertebral cancellous cores but these relationships have not been explored for whole vertebral bodies. In this study, the objectives were to investigate the relationship of FE-calculated stress distribution parameters with experimentally determined strength, stiffness, and displacement based ductility measures in human whole vertebral bodies, investigate the effect of endplate loading conditions on vertebral stiffness, strength, and ductility and test the hypothesis that endplate topography affects vertebral ductility and stress distributions. Eighteen vertebral bodies (T6-L3 levels; 4 female and 5 male cadavers, aged 40-98 years) were scanned using a flat panel CT system and followed with axial compression testing with Wood’s metal as filler material to maintain flat boundaries between load plates and specimens. FE models were constructed using reconstructed CT images and filler material was added digitally. Two different FE models with different filler material modulus simulating Wood’s metal and intervertebral disc (W-layer and D-layer models) were used. Element material modulus to cancellous bone was based on image gray value. Average, standard deviation, and coefficient of variation of von Mises stress in vertebral bone for W-layer and D-layer models and also the ratios of FE parameters from the two models (W/D) were calculated. Inferior and superior endplate surface topographical distribution parameters were calculated. Experimental stiffness, maximum load and work to fracture had the highest correlation with FE-calculated stiffness while experimental ductility measures had highest correlations with FE-calculated average von Mises stress and W-layer to D-layer stiffness ratio. Endplate topography of the vertebra was also associated with its structural ductility and the distribution parameter that best explained this association was kurtosis of inferior endplate topography. Our results indicate that endplate topography variations may provide insight into the mechanisms responsible for vertebral fractures. PMID:20633709

  10. On the structural properties of small-world networks with range-limited shortcut links

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Kulkarni, Rahul V.

    2013-12-01

    We explore a new variant of small-world networks (SWNs), in which an additional parameter (r) sets the length scale over which shortcuts are uniformly distributed. When r=0 we have an ordered network, whereas r=1 corresponds to the original Watts-Strogatz SWN model. These limited-range SWNs have degree distributions and scaling properties similar to those of the original SWN model. We observe the small-world phenomenon for r≪1, indicating that global shortcuts are not necessary for the small-world effect. For limited-range SWNs, the average path length changes nonmonotonically with system size, whereas for the original SWN model it increases monotonically. We propose an expression for the average path length of limited-range SWNs based on numerical simulations and analytical approximations.
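
    Since the construction is algorithmic, a short sketch may help. The rule below is one plausible reading of the range-limited shortcut placement (a ring lattice plus shortcuts whose span along the ring is drawn uniformly up to a fraction r of the system size); the paper's exact rule may differ, and all parameter values are illustrative:

```python
import random
import networkx as nx

def limited_range_swn(n, k, p, r, seed=0):
    """Ring lattice (each node linked to k neighbours per side) plus shortcuts:
    with probability p per node, add a link whose span along the ring is drawn
    uniformly from [1, r*n/2]. One plausible reading of the construction."""
    rng = random.Random(seed)
    g = nx.Graph()
    for i in range(n):
        for j in range(1, k + 1):
            g.add_edge(i, (i + j) % n)
    max_span = max(1, int(r * n / 2))
    for i in range(n):
        if rng.random() < p:
            g.add_edge(i, (i + rng.randint(1, max_span)) % n)
    return g

g = limited_range_swn(n=1000, k=2, p=0.1, r=0.2)
print("average path length:", round(nx.average_shortest_path_length(g), 2))
```

    Sweeping r from 0 toward 1 and recording the average path length reproduces the qualitative transition from an ordered lattice to small-world behavior.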

  11. Parameterization of a mesoscopic model for the self-assembly of linear sodium alkyl sulfates

    NASA Astrophysics Data System (ADS)

    Mai, Zhaohuan; Couallier, Estelle; Rakib, Mohammed; Rousseau, Bernard

    2014-05-01

    A systematic approach to develop mesoscopic models for a series of linear anionic surfactants (CH3(CH2)n-1OSO3Na, n = 6, 9, 12, 15) by dissipative particle dynamics (DPD) simulations is presented in this work. The four surfactants are represented by coarse-grained models composed of the same head group and different numbers of identical tail beads. The transferability of the DPD model over different surfactant systems is carefully checked by adjusting the repulsive interaction parameters and the rigidity of surfactant molecules, in order to reproduce key equilibrium properties of the aqueous micellar solutions observed experimentally, including critical micelle concentration (CMC) and average micelle aggregation number (Nag). We find that the chain length is a good index to optimize the parameters and evaluate the transferability of the DPD model. Our models qualitatively reproduce the essential properties of these surfactant analogues with a set of best-fit parameters. It is observed that the logarithm of the CMC value decreases linearly with the surfactant chain length, in agreement with Klevens' rule. With the best-fit and transferable set of parameters, we have been able to calculate the free energy contribution to micelle formation per methylene unit of -1.7 kJ/mol, very close to the experimentally reported value.
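
    The Klevens-type analysis mentioned above amounts to a linear fit of log CMC against chain length, with the slope times RT giving the free-energy increment per methylene unit. A minimal sketch, using synthetic CMC values chosen only to illustrate the arithmetic (the paper's simulated values are not reproduced here):

```python
import numpy as np

# Hypothetical CMC values (mol/L) for chain lengths n = 6, 9, 12, 15;
# chosen only to illustrate the fit, not taken from the paper.
n = np.array([6.0, 9.0, 12.0, 15.0])
cmc = np.array([4.2e-1, 6.0e-2, 8.2e-3, 1.1e-3])

slope, intercept = np.polyfit(n, np.log(cmc), 1)    # Klevens: ln(CMC) linear in n
RT = 8.314 * 298.15 / 1000.0                        # kJ/mol at 298 K
print(f"dG per CH2 unit: {RT * slope:.2f} kJ/mol")  # ~ -1.7 kJ/mol for these values
```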

  12. Determination of morphological parameters of biological cells by analysis of scattered-light distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, D.E.

    1979-11-01

    The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models was compared to calculatedmore » distributions. Spectrum analysis techniques applied on the light-scatter data give the sought after morphological cell parameters. These results compared favorably to shape parameters of the fabricated cell models confirming the mathematical model procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)« less

  13. Noisy coupled logistic maps in the vicinity of chaos threshold.

    PubMed

    Tirnakli, Ugur; Tsallis, Constantino

    2016-04-01

    We focus on a linear chain of N first-neighbor-coupled logistic maps in the vicinity of their edge of chaos in the presence of a common noise. This model, characterised by the coupling strength ϵ and the noise width σmax, was recently introduced by Pluchino et al. [Phys. Rev. E 87, 022910 (2013)]. They detected, for the time-averaged returns with characteristic return time τ, possible connections with q-Gaussians, the distributions which optimise, under appropriate constraints, the nonadditive entropy Sq, basis of nonextensive statistical mechanics. Here, we take a closer look at this model, and numerically obtain probability distributions which exhibit a slight asymmetry for some parameter values, at variance with simple q-Gaussians. Nevertheless, over many decades, the fitting with q-Gaussians turns out to be numerically very satisfactory for wide regions of the parameter values, and we illustrate how the index q evolves with (N,τ,ϵ,σmax). This is nevertheless instructive as to how careful one must be in such numerical analyses. The overall work shows that physical and/or biological systems that are correctly mimicked by this model are thermostatistically related to nonextensive statistical mechanics when time-averaged relevant quantities are studied.
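
    For reference, the q-Gaussian form used in such fits is standard: it reduces to an ordinary Gaussian as q → 1 and develops heavier tails for q > 1. A minimal sketch (normalisation constant omitted; parameter values illustrative):

```python
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalised q-Gaussian; the q -> 1 limit is an ordinary Gaussian."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(-beta * x**2)
    base = np.maximum(1.0 - (1.0 - q) * beta * x**2, 0.0)  # compact support if q < 1
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-3.0, 3.0, 7)
print(q_gaussian(x, q=1.5, beta=1.0))   # heavier tails than a Gaussian
```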

  14. Modelling audiovisual integration of affect from videos and music.

    PubMed

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five-parameter differential-weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high-arousal combinations and no interaction effects. This pattern was explained by a three-parameter constant-weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
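
    A weight-averaging (information-integration) model of this general kind combines an initial state with the weighted scale values of the presented components. The sketch below shows the basic computation; the weights, scale values, and initial state are hypothetical, not the fitted values from these experiments:

```python
def averaging_response(values, weights, s0=0.0, w0=1.0):
    """Weight-averaged response of an initial state (s0, w0) and the presented
    components; all weights and scale values here are hypothetical."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, values))
    den = w0 + sum(weights)
    return num / den

# e.g. a positive video paired with mildly negative music, video weighted more:
print(averaging_response(values=[2.0, -1.0], weights=[1.6, 1.0]))
```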

  15. Noisy coupled logistic maps in the vicinity of chaos threshold

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino

    2016-04-01

    We focus on a linear chain of N first-neighbor-coupled logistic maps in the vicinity of their edge of chaos in the presence of a common noise. This model, characterised by the coupling strength ɛ and the noise width σmax, was recently introduced by Pluchino et al. [Phys. Rev. E 87, 022910 (2013)]. They detected, for the time-averaged returns with characteristic return time τ, possible connections with q-Gaussians, the distributions which optimise, under appropriate constraints, the nonadditive entropy Sq, basis of nonextensive statistical mechanics. Here, we take a closer look at this model, and numerically obtain probability distributions which exhibit a slight asymmetry for some parameter values, at variance with simple q-Gaussians. Nevertheless, over many decades, the fitting with q-Gaussians turns out to be numerically very satisfactory for wide regions of the parameter values, and we illustrate how the index q evolves with (N, τ, ɛ, σmax). This is nevertheless instructive as to how careful one must be in such numerical analyses. The overall work shows that physical and/or biological systems that are correctly mimicked by this model are thermostatistically related to nonextensive statistical mechanics when time-averaged relevant quantities are studied.

  16. On the neural modeling of some dynamic parameters of earthquakes and fire safety in high-rise construction

    NASA Astrophysics Data System (ADS)

    Haritonova, Larisa

    2018-03-01

    The recent change in the relationship between the numbers of man-made and natural catastrophes is presented in the paper. Some recommendations are proposed to increase firefighting efficiency in high-rise buildings. The article analyzes the methodology of modeling seismic effects. The promise of applying neural modeling and artificial neural networks to the analysis of such dynamic parameters of earthquake foci as the value of dislocation (or the average rupture slip) is shown. The following two input signals were used: the power class and the number of earthquakes. A regression analysis was carried out for the predicted results and the target outputs. The equations of the regression for the outputs and targets are presented in the work, as well as the correlation coefficients for training, validation, testing, and the total (All) for the network structure 2-5-5-1 for the average rupture slip. The application of the results obtained in the article to the seismic design of newly constructed buildings and structures, together with the given recommendations, will provide additional protection from fire and earthquake risks and reduce their negative economic and environmental consequences.
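
    A feed-forward network with the stated 2-5-5-1 structure (two inputs, two hidden layers of five neurons, one output) is straightforward to reproduce. A sketch follows using scikit-learn; the training data are synthetic stand-ins, since the paper's earthquake catalogue is not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins: inputs are (power class, number of earthquakes),
# target is average rupture slip; none of these values come from the paper.
X = rng.uniform([8.0, 1.0], [17.0, 500.0], size=(200, 2))
y = 0.02 * np.exp(0.4 * (X[:, 0] - 8.0)) + 0.001 * X[:, 1]

net = make_pipeline(
    StandardScaler(),                     # scale inputs for stable training
    MLPRegressor(hidden_layer_sizes=(5, 5), max_iter=5000, random_state=0),
)
net.fit(X, y)
print("training R^2:", round(net.score(X, y), 3))  # regression of outputs vs targets
```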

  17. Influences of source condition and dissolution on bubble plume in a stratified environment

    NASA Astrophysics Data System (ADS)

    Chu, Shigan; Prosperetti, Andrea

    2017-11-01

    A cross-sectionally averaged model is used to study a bubble plume rising in a stratified quiescent liquid. Scaling analyses for the peel height, at which the plume momentum vanishes, and the neutral height, at which its average density equals the ambient density, are presented. Contrary to a widespread practice in the literature, it is argued that the neutral height cannot be identified with the experimentally reported intrusion height. Recognizing this difference explains why the intrusion height is so frequently found to lie well above predictions, and brings the theoretical results in line with observations. The mathematical model depends on three dimensionless parameters, some of which are related to the inlet conditions at the plume source. Their influence on the peel and neutral heights is illustrated by means of numerical results. Aside from the source parameters, we incorporate the dissolution of bubbles and the corresponding density change of the plume into the model. Contrary to what is documented in the literature, the density change of the plume due to dissolution plays an important role in maintaining the total buoyancy of the plume, thus alleviating the rapid decrease of the peel height caused by dissolution.

  18. QSAR analysis for nano-sized layered manganese-calcium oxide in water oxidation: An application of chemometric methods in artificial photosynthesis.

    PubMed

    Shahbazy, Mohammad; Kompany-Zareh, Mohsen; Najafpour, Mohammad Mahdi

    2015-11-01

    Water oxidation is among the most important reactions in artificial photosynthesis, and nano-sized layered manganese-calcium oxides are efficient catalysts for this reaction. Herein, a quantitative structure-activity relationship (QSAR) model was constructed to predict the catalytic activities of twenty manganese-calcium oxides toward water oxidation, using multiple linear regression (MLR) and a genetic algorithm (GA) for multivariate calibration and feature selection, respectively. Although eight parameters are controlled during synthesis of the desired catalysts (ripening time, temperature, manganese content, calcium content, potassium content, the calcium:manganese ratio, the average manganese oxidation state, and the catalyst surface area), the GA selected only three of them (potassium content, the calcium:manganese ratio, and the average manganese oxidation state) as the most effective parameters for the catalytic activities of these compounds. The model's accuracy criteria for predicting the catalytic rate of the external test-set experiments, R²test and Q²test, were 0.941 and 0.906, respectively. The model therefore shows acceptable capability to predict catalytic activity. Copyright © 2015 Elsevier B.V. All rights reserved.
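
    The paper couples a genetic algorithm to MLR for descriptor selection; with only eight descriptors and three to be chosen, an exhaustive search over all 56 subsets is a simple stand-in that conveys the same idea. A sketch with synthetic data (the real descriptor matrix is not reproduced here):

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 8))    # 20 catalysts x 8 synthesis descriptors (synthetic)
y = 1.5 * X[:, 4] - 0.8 * X[:, 5] + 0.5 * X[:, 6] + rng.normal(0.0, 0.2, 20)

def cv_score(cols):
    """Cross-validated R^2 of an MLR model restricted to the given descriptors."""
    return cross_val_score(LinearRegression(), X[:, list(cols)], y, cv=5).mean()

best = max(itertools.combinations(range(8), 3), key=cv_score)
print("selected descriptor subset:", best, " cv R^2: %.3f" % cv_score(best))
```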

  19. Estimation of Community Land Model parameters for an improved assessment of net carbon fluxes at European sites

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2017-03-01

    The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to treat explicitly the uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality of fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
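
    DREAM(zs) is a multi-chain adaptive MCMC sampler whose mechanics exceed a short example, but the underlying idea of sampling a posterior can be conveyed by a single-chain random-walk Metropolis sketch. The data and likelihood below are synthetic stand-ins, not CLM or NEE specifics:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(2.0, 0.5, 200)        # synthetic stand-in for flux observations

def log_like(theta, sigma=0.5):
    """Gaussian log-likelihood of a single location parameter (flat prior)."""
    return -0.5 * np.sum((obs - theta) ** 2) / sigma**2

theta, chain = 0.0, []
for _ in range(5000):                  # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop
    chain.append(theta)

burned = chain[1000:]                  # discard burn-in
print("posterior mean %.3f +/- %.3f" % (np.mean(burned), np.std(burned)))
```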

  20. Optimization of terrestrial ecosystem model parameters using atmospheric CO2 concentration data with a global carbon assimilation system (GCAS)

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zhang, S.; Zheng, X.; Shangguan, W.

    2016-12-01

    A global carbon assimilation system (GCAS) that assimilates ground-based atmospheric CO2 data is used to estimate several key parameters in a terrestrial ecosystem model for the purpose of improving carbon cycle simulation. The optimized parameters are the leaf maximum carboxylation rate at 25 °C (Vmax25), the temperature sensitivity of ecosystem respiration (Q10), and the soil carbon pool size. The optimization is performed at the global scale at 1° resolution for the period from 2002 to 2008. Optimized multi-year average Vmax25 values range from 49 to 51 μmol m⁻² s⁻¹ over most regions of the world. Vegetation from tropical zones has relatively lower values than vegetation in temperate regions. Optimized multi-year average Q10 values varied from 1.95 to 2.05 over most regions of the world. Relatively high values of Q10 are derived over high- and mid-latitude regions. Both Vmax25 and Q10 exhibit pronounced seasonal variations at mid-high latitudes. The maximum in Vmax25 occurs during the growing season, while the minima appear during non-growing seasons. Q10 values decrease with increasing temperature. The seasonal variabilities of Vmax25 and Q10 are larger at higher latitudes, with tropical and low-latitude regions showing little seasonal variability.

  1. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffness reductions of 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton (Registered Trademark) HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  2. On application of asymmetric Kan-like exact equilibria to the Earth magnetotail modeling

    NASA Astrophysics Data System (ADS)

    Korovinskiy, Daniil B.; Kubyshkina, Darya I.; Semenov, Vladimir S.; Kubyshkina, Marina V.; Erkaev, Nikolai V.; Kiehas, Stefan A.

    2018-04-01

    A specific class of solutions of the Vlasov-Maxwell equations, developed by means of generalization of the well-known Harris-Fadeev-Kan-Manankova family of exact two-dimensional equilibria, is studied. The examined model reproduces the current sheet bending and shifting in the vertical plane, arising from the Earth dipole tilting and the nonradial propagation of the solar wind. The generalized model allows magnetic configurations with equatorial magnetic fields decreasing in the tailward direction as slowly as 1/x, in contrast to the original Kan model (1/x³); magnetic configurations with a single X point are also available. The analytical solution is compared with the empirical T96 model in terms of the magnetic flux tube volume. It is found that the parameters of the analytical model may be adjusted to fit a wide range of averaged magnetotail configurations. The best agreement between the analytical and empirical models is obtained for the midtail at distances beyond 10-15 RE at high levels of magnetospheric activity. The essential model parameters (current sheet scale, current density) are compared to Cluster data of magnetotail crossings. The best match of parameters is found for single-peaked current sheets with medium values of number density, proton temperature and drift velocity.

  3. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
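
    The bias described above is easy to demonstrate: measurement noise attenuates the lag-1 autocorrelation of the observed series by the reliability ratio var(x)/var(y). A minimal simulation sketch (parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):                    # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0.0, 1.0, n)          # add white measurement noise

naive = np.corrcoef(y[:-1], y[1:])[0, 1]  # naive AR(1) estimate from noisy data
var_x = 1.0 / (1.0 - phi**2)              # stationary variance of the AR(1)
expected = phi * var_x / (var_x + 1.0)    # attenuation by the reliability ratio
print(f"true phi {phi}, naive {naive:.2f}, predicted attenuated {expected:.2f}")
```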

  4. The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary

    2004-01-01

    Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold-Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT), based on the Quasi Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow-pit measurements, although different averages were used for the lower frequencies vs. the highest one, due to the different penetration depths and to the stratigraphy of the snowpack. Mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to satisfactorily reproduce the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The values of the best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of small, medium, and large grain sizes measured at the LSOS site.

  5. Improvement of shallow landslide prediction accuracy using soil parameterisation for a granite area in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Onda, Y.; Kim, J. K.

    2015-01-01

    The SHALSTAB model was applied to rainfall-induced shallow landslides in a granite area of the Jinbu region, Republic of Korea, to evaluate soil properties related to the effect of soil depth. Soil depth measured by a knocking-pole test, two soil parameters from direct shear tests (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I); soil data representing the effect of the measured soil depth and of the average soil depth derived from the Case I data were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the accuracy of prediction. ROC values for the first simulation were low (under 0.75), possibly due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. ROC analysis showed higher accuracy for Case II and Case III than for the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when soil parameters represent the effect of soil thickness.

  6. Moisture Damage Modeling in Lime and Chemically Modified Asphalt at Nanolevel Using Ensemble Computational Intelligence

    PubMed Central

    2018-01-01

    This paper measures the adhesion/cohesion force among asphalt molecules at nanoscale level using an Atomic Force Microscopy (AFM) and models the moisture damage by applying state-of-the-art Computational Intelligence (CI) techniques (e.g., artificial neural network (ANN), support vector regression (SVR), and an Adaptive Neuro Fuzzy Inference System (ANFIS)). Various combinations of lime and chemicals as well as dry and wet environments are used to produce different asphalt samples. The parameters that were varied to generate different asphalt samples and measure the corresponding adhesion/cohesion forces are percentage of antistripping agents (e.g., Lime and Unichem), AFM tips K values, and AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in values of the above parameters. To achieve enhanced performance, the statistical methods such as average, weighted average, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN can model moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI along with statistical measurement provides better accuracy than any of the individual CI techniques. PMID:29849551
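
    The ensemble step is a statistical combination of the individual CI outputs. A sketch of simple and inverse-error-weighted averaging follows; the prediction and error values are hypothetical:

```python
import numpy as np

def ensemble(preds, errors=None):
    """Combine model predictions (rows = models): simple average, or a
    weighted average with weights inversely proportional to validation error."""
    preds = np.asarray(preds, dtype=float)
    if errors is None:
        return preds.mean(axis=0)
    w = 1.0 / np.asarray(errors, dtype=float)
    return (w[:, None] * preds).sum(axis=0) / w.sum()

# Hypothetical adhesion-force predictions (nN) from ANN, SVR and ANFIS:
print(ensemble([[5.1, 7.3], [4.8, 7.9], [5.4, 7.0]], errors=[0.20, 0.35, 0.30]))
```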

  7. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

    Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey regional regression equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks, as though no field data were available, and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.

  8. Acid base properties of cyanobacterial surfaces I: Influences of growth phase and nitrogen metabolism on cell surface reactivity

    NASA Astrophysics Data System (ADS)

    Lalonde, S. V.; Smith, D. S.; Owttrim, G. W.; Konhauser, K. O.

    2008-03-01

    Significant efforts have been made to elucidate the chemical properties of bacterial surfaces for the purposes of refining surface complexation models that can account for their metal-sorptive behavior under diverse conditions. However, the influence of culturing conditions on surface chemical parameters modeled from potentiometric titration of bacterial surfaces has received little regard. While culture age and metabolic pathway have been considered as factors potentially influencing cell surface reactivity, statistical treatments have been incomplete and variability has remained unconfirmed. In this study, we employ potentiometric titrations to evaluate variations in bacterial surface ligand distributions using live cells of the sheathless cyanobacterium Anabaena sp. strain PCC 7120, grown under a variety of batch culture conditions. We evaluate the ability of a single set of modeled parameters, describing acid-base surface properties averaged over all culture conditions tested, to accurately account for the ligand distributions modeled for each individual culture condition. In addition to considering growth phase, we assess the role of the various assimilatory nitrogen metabolisms available to this organism as potential determinants of surface reactivity. We observe statistically significant variability in site distribution between the majority of conditions assessed. By employing post hoc Tukey-Kramer analysis for all possible pair-wise condition comparisons, we conclude that the average parameters are inadequate for an accurate chemical description of this cyanobacterial surface. It was determined that for this Gram-negative bacterium in batch culture, ligand distributions were influenced to a greater extent by the nitrogen assimilation pathway than by growth phase.

  9. General molecular mechanics method for transition metal carboxylates and its application to the multiple coordination modes in mono- and dinuclear Mn(II) complexes.

    PubMed

    Deeth, Robert J

    2008-08-04

    A general molecular mechanics method is presented for modeling the symmetric bidentate, asymmetric bidentate, and bridging modes of metal carboxylates with a single parameter set by using a double-minimum M-O-C angle-bending potential. The method is implemented within the Molecular Operating Environment (MOE) with parameters based on the Merck molecular force field although, with suitable modifications, other MM packages and force fields could easily be used. Parameters for high-spin d⁵ manganese(II) bound to carboxylate and water plus amine, pyridyl, imidazolyl, and pyrazolyl donors are developed based on 26 mononuclear and 29 dinuclear crystallographically characterized complexes. The average rmsd for Mn-L distances is 0.08 Å, which is comparable to the experimental uncertainty required to cover multiple binding modes, and the average rmsd in heavy atom positions is around 0.5 Å. In all cases, whatever binding mode is reported is also computed to be a stable local minimum. In addition, the structure-based parametrization implicitly captures the energetics and gives the same relative energies of symmetric and asymmetric coordination modes as density functional theory calculations in model and "real" complexes. Molecular dynamics simulations show that carboxylate rotation is favored over "flipping" while a stochastic search algorithm is described for randomly searching conformational space. The model reproduces Mn-Mn distances in dinuclear systems especially accurately, and this feature is employed to illustrate how MM calculations on models for the dimanganese active site of methionine aminopeptidase can help determine some of the details which may be missing from the experimental structure.

  10. The effect of increase in dielectric values on specific absorption rate (SAR) in eye and head tissues following 900, 1800 and 2450 MHz radio frequency (RF) exposure

    NASA Astrophysics Data System (ADS)

    Keshvari, Jafar; Keshvari, Rahim; Lang, Sakari

    2006-03-01

    Numerous studies have attempted to address the question of the RF energy absorption difference between children and adults using computational methods. They have assumed the same dielectric parameters for child and adult head models in SAR calculations. This has been criticized by many researchers, who have stated that child organs are not fully developed, their anatomy is different, and their tissue composition is slightly different, with higher water content. Higher water content would affect dielectric values, which in turn would have an effect on RF energy absorption. The objective of this study was to investigate possible variation in the specific absorption rate (SAR) in the head region of children and adults by applying the finite-difference time-domain (FDTD) method and using anatomically correct child and adult head models. In the calculations, the conductivity and permittivity of all tissues were increased by 5 to 20%, but otherwise the same exposure conditions were used. A half-wave dipole antenna was used as an exposure source to minimize the uncertainties of positioning a real mobile device and to make the simulations easily replicable. Common mobile telephony frequencies of 900, 1800 and 2450 MHz were used in this study. The exposures of the ear and eye regions were investigated. The SARs of models with increased dielectric values were compared to the SARs of the models where dielectric values were unchanged. The analyses suggest that increasing the value of the dielectric parameters does not necessarily mean that the volume-averaged SAR would increase. Under many exposure conditions, specifically at higher frequencies in eye exposure, the volume-averaged SAR decreases. An increase of up to 20% in dielectric conductivity, or in both conductivity and permittivity, always caused a SAR variation of less than 20%, usually about 5%, when averaged over 1, 5 or 10 g of cubic mass for all models. The thickness and composition of different tissue layers in the exposed regions of the human head play a more significant role in SAR variation than the variations (5-20%) of the tissue dielectric parameters.

  11. Acoustic energy relations in Mudejar-Gothic churches.

    PubMed

    Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel

    2007-01-01

    Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO 3382 standard. Monaural objective measures in the 125-4000 Hz frequency range and their spatial distributions were obtained. The acoustic parameters clarity C80, definition D50, sound strength G, and center time Ts were deduced from impulse response analysis using a maximum-length-sequence measurement system in each church. These parameters, spectrally averaged according to the criteria most widely used for rating the acoustic quality of auditoria, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical theory and by other existing theoretical models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength, and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.

  12. Spatially distributed groundwater recharge estimated using a water-budget model for the Island of Maui, Hawai`i, 1978–2007

    USGS Publications Warehouse

    Johnson, Adam G.; Engott, John A.; Bassiouni, Maoya; Rotzoll, Kolja

    2014-12-14

    Demand for freshwater on the Island of Maui is expected to grow. To evaluate the availability of fresh groundwater, estimates of groundwater recharge are needed. A water-budget model with a daily computation interval was developed and used to estimate the spatial distribution of recharge on Maui for average climate conditions (1978–2007 rainfall and 2010 land cover) and for drought conditions (1998–2002 rainfall and 2010 land cover). For average climate conditions, mean annual recharge for Maui is about 1,309 million gallons per day, or about 44 percent of precipitation (rainfall and fog interception). Recharge for average climate conditions is about 39 percent of total water inflow consisting of precipitation, irrigation, septic leachate, and seepage from reservoirs and cesspools. Most recharge occurs on the wet, windward slopes of Haleakalā and in the wet uplands of West Maui Mountain. Dry, coastal areas generally have low recharge. In the dry isthmus, however, irrigated fields have greater recharge than nearby unirrigated areas. For drought conditions, mean annual recharge for Maui is about 1,010 million gallons per day, which is 23 percent less than recharge for average climate conditions. For individual aquifer-system areas used for groundwater management, recharge for drought conditions is about 8 to 51 percent less than recharge for average climate conditions. The spatial distribution of rainfall is the primary factor determining spatially distributed recharge estimates for most areas on Maui. In wet areas, recharge estimates are also sensitive to water-budget parameters that are related to runoff, fog interception, and forest-canopy evaporation. In dry areas, recharge estimates are most sensitive to irrigated crop areas and parameters related to evapotranspiration.
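
    The daily computation interval boils down to soil-moisture bookkeeping: inflows fill soil storage up to capacity, and the surplus becomes recharge. The sketch below is a generic version of such a step, not the USGS code; the variable names and values are illustrative (units, say, inches):

```python
def daily_step(precip, irrigation, runoff, et, storage, capacity):
    """One day of water-budget bookkeeping: inflows fill soil storage up to
    capacity and the surplus becomes recharge (a generic sketch)."""
    water = max(storage + precip + irrigation - runoff - et, 0.0)
    recharge = max(water - capacity, 0.0)
    return recharge, min(water, capacity)    # (recharge, updated storage)

print(daily_step(precip=1.2, irrigation=0.0, runoff=0.3, et=0.15,
                 storage=3.0, capacity=3.5))  # -> (0.25, 3.5)
```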

  13. Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis

    NASA Astrophysics Data System (ADS)

    Awais, M.; Saleem, S.; Hayat, T.; Irum, S.

    2016-12-01

    This communication presents the magnetohydrodynamic (MHD) flow of a couple-stress nanofluid over a convective moving wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon combined with thermophoresis and Brownian motion effects is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled nonlinear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling process parameter, and the energy enhancement parameters are displayed via graphs and discussed in detail. Tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted, showing the difference between the Newtonian and couple-stress fluid models.

  14. Dependence of subject-specific parameters for a fast helical CT respiratory motion model on breathing rate: an animal study

    NASA Astrophysics Data System (ADS)

    O'Connell, Dylan; Thomas, David H.; Lamb, James M.; Lewis, John H.; Dou, Tai; Sieren, Jered P.; Saylor, Melissa; Hofmann, Christian; Hoffman, Eric A.; Lee, Percy P.; Low, Daniel A.

    2018-02-01

    To determine whether the parameters relating lung tissue displacement to a breathing surrogate signal in a previously published respiratory motion model vary with the rate of breathing during image acquisition, an anesthetized pig was imaged using multiple fast helical scans to sample the breathing cycle with simultaneous surrogate monitoring. Three datasets were collected while the animal was mechanically ventilated at different respiratory rates: 12 bpm (breaths per minute), 17 bpm, and 24 bpm. Three sets of motion model parameters describing the correspondences between surrogate signals and tissue displacements were determined. The model error was calculated individually for each dataset, as well as for pairs of parameters and surrogate signals from different experiments. The values of one model parameter, a vector field denoted α, which related tissue displacement to surrogate amplitude, were compared across the experiments. The mean model error of the three datasets was 1.00 ± 0.36 mm with a 95th percentile value of 1.69 mm. The mean error computed from all combinations of parameters and surrogate signals from different datasets was 1.14 ± 0.42 mm with a 95th percentile of 1.95 mm. The mean difference in α over all pairs of experiments was 4.7% ± 5.4%, and the 95th percentile was 16.8%. The mean angle between pairs of α was 5.0 ± 4.0 degrees, with a 95th percentile of 13.2 degrees. The motion model parameters were largely unaffected by changes in the breathing rate during image acquisition. The mean error associated with mismatched sets of parameters and surrogate signals was 0.14 mm greater than the error achieved when using parameters and surrogate signals acquired with the same breathing rate, while the maximum respiratory motion was 23.23 mm on average.

  15. Surface energy balance estimates at local and regional scales using optical remote sensing from an aircraft platform and atmospheric data collected over semiarid rangelands

    USGS Publications Warehouse

    Kustas, William P.; Moran, M.S.; Humes, K.S.; Stannard, D.I.; Pinter, P. J.; Hipps, L.E.; Swiatek, E.; Goodrich, D.C.

    1994-01-01

    Remotely sensed data in the visible, near-infrared, and thermal-infrared wave bands were collected from a low-flying aircraft during the Monsoon '90 field experiment. Monsoon '90 was a multidisciplinary experiment conducted in a semiarid watershed. It had as one of its objectives the quantification of hydrometeorological fluxes during the "monsoon" or wet season. The remote sensing observations, along with micrometeorological and atmospheric boundary layer (ABL) data, were used to compute the surface energy balance over a range of spatial scales. The procedure involved averaging multiple pixels along transects flown over the meteorological and flux (METFLUX) stations. Average values of the spectral reflectance and thermal-infrared temperatures were computed for pixels of order 10⁻¹ to 10¹ km in length and were used with atmospheric data for evaluating net radiation (Rn), soil heat flux (G), and sensible (H) and latent (LE) heat fluxes at these same length scales. The model employs a single-layer resistance approach for estimating H that requires wind speed and air temperature in the ABL and a remotely sensed surface temperature. The values of Rn and G are estimated from remote sensing information together with near-surface observations of air temperature, relative humidity, and solar radiation. Finally, LE is solved as the residual term in the surface energy balance equation. Model calculations were compared to measurements from the METFLUX network for three days having different environmental conditions. Average percent differences for the three days between model and METFLUX estimates of the local fluxes were about 5% for Rn, 20% for G and H, and 15% for LE. Larger differences occurred during partly cloudy conditions because of errors in interpreting the remote sensing data and the higher spatial and temporal variation in the energy fluxes. Minor variations in modeled energy fluxes were observed when the pixel size representing the remote sensing inputs changed from 0.2 to 2 km. Regional-scale estimates of the surface energy balance using bulk ABL properties for the model parameters and input variables and the 10-km pixel data differed from the METFLUX network averages by about 4% for Rn, 10% for G and H, and 15% for LE. Model sensitivity in calculating the turbulent fluxes H and LE to possible variations in key model parameters (i.e., the roughness lengths for heat and momentum) was found to be fairly significant. Therefore the reliability of the methods for estimating key model parameters and potential errors needs further testing over different ecosystems and environmental conditions.
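
    The core computation is compact: H from a single-layer resistance formulation driven by the remotely sensed surface temperature, and LE as the energy-balance residual. A minimal sketch with illustrative values, where ra is an assumed aerodynamic resistance in s/m:

```python
def latent_heat_residual(Rn, G, Ts, Ta, ra, rho=1.2, cp=1005.0):
    """LE as the residual of the surface energy balance (fluxes in W/m^2).
    H uses a single-layer resistance form with aerodynamic resistance ra (s/m)."""
    H = rho * cp * (Ts - Ta) / ra    # sensible heat from surface-air temperature difference
    return Rn - G - H, H

LE, H = latent_heat_residual(Rn=550.0, G=80.0, Ts=38.0, Ta=30.0, ra=30.0)
print(f"H = {H:.0f} W/m^2, LE = {LE:.0f} W/m^2")
```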

  16. Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures

    NASA Astrophysics Data System (ADS)

    Rowley, R. L.; Stoker, J. M.; Giles, N. F.

    1991-05-01

    The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of "effective" LJ parameters are not expected to simulate accurately diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple, compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.

  17. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e., the embedding dimension, time lag, and number of nearest neighbours. Optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to local optima and thus limit prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find a global optimum of the embedding parameters. It is also compared with another global optimization approach, the genetic algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA is also advantageous in terms of computational efficiency. The scheme proposed here can also be applied to other problems, such as prediction of hydro-climatic time series and error correction.
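
    A simulated-annealing search over the three embedding parameters can be sketched compactly. The objective below is a synthetic placeholder for the local model's out-of-sample prediction error; in practice it would be computed from the reconstructed phase space of the streamflow series:

```python
import math
import random

def prediction_error(m, tau, k):
    """Placeholder objective: stands in for the local model's out-of-sample
    error as a function of embedding dimension m, time lag tau, neighbours k."""
    return (m - 4) ** 2 + (tau - 7) ** 2 / 4.0 + (k - 10) ** 2 / 9.0

rng = random.Random(0)
state, T = (2, 1, 1), 10.0
for _ in range(2000):                              # simulated annealing
    cand = tuple(max(1, v + rng.choice((-1, 0, 1))) for v in state)
    dE = prediction_error(*cand) - prediction_error(*state)
    if dE < 0 or rng.random() < math.exp(-dE / T): # accept downhill, sometimes uphill
        state = cand
    T *= 0.995                                     # geometric cooling schedule
print("selected (m, tau, k):", state)
```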

  18. Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.

  19. Intercomparison of air quality data using principal component analysis, and forecasting of PM₁₀ and PM₂.₅ concentrations using artificial neural networks, in Thessaloniki and Helsinki.

    PubMed

    Voukantsis, Dimitris; Karatzas, Kostas; Kukkonen, Jaakko; Räsänen, Teemu; Karppinen, Ari; Kolehmainen, Mikko

    2011-03-01

    In this paper we propose a methodology consisting of specific computational intelligence methods, i.e. principal component analysis and artificial neural networks, in order to inter-compare air quality and meteorological data, and to forecast the concentration levels for environmental parameters of interest (air pollutants). We apply these methods to data monitored in the urban areas of Thessaloniki and Helsinki, in Greece and Finland, respectively. For this purpose, we applied the principal component analysis method in order to inter-compare the patterns of air pollution in the two selected cities. Then, we proceeded with the development of air quality forecasting models for both studied areas. On this basis, we formulated and employed a novel hybrid scheme in the selection process of input variables for the forecasting models, involving a combination of linear regression and artificial neural network (multi-layer perceptron) models. The latter were used for forecasting the daily mean concentrations of PM₁₀ and PM₂.₅ for the next day. Results demonstrated an index of agreement between measured and modelled daily averaged PM₁₀ concentrations of between 0.80 and 0.85, while the kappa index for the forecasting of the daily averaged PM₁₀ concentrations reached 60% for both cities. Compared with previous corresponding studies, these statistical parameters indicate improved performance in forecasting air quality parameters. It was also found that the performance of the models for forecasting the daily mean concentrations of PM₁₀ was not substantially different for the two cities, despite the major differences of the two urban environments under consideration. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Simulation of semi-arid hydrological processes at different spatial resolutions using the AgroEcoSystem-Watershed (AgES-W) model

    NASA Astrophysics Data System (ADS)

    Green, T. R.; Erksine, R. H.; David, O.; Ascough, J. C., II; Kipka, H.; Lloyd, W. J.; McMaster, G. S.

    2015-12-01

    Water movement and storage within a watershed may be simulated at different spatial resolutions of land areas or hydrological response units (HRUs). Here, effects of HRU size on simulated soil water and surface runoff are tested using the AgroEcoSystem-Watershed (AgES-W) model with three different resolutions of HRUs. We studied a 56-ha agricultural watershed in northern Colorado, USA, farmed primarily under a wheat-fallow rotation. The delineation algorithm was based upon topography (surface flow paths), land use (crop management strips and native grass), and mapped soil units (three types), which produced HRUs that follow the land use and soil boundaries. AgES-W model parameters that control surface and subsurface hydrology were calibrated using simulated daily soil moisture at different landscape positions and depths, where soil moisture was measured hourly and averaged to daily values. Parameter sets were both uniform and spatially variable with depth and across the watershed (5 different calibration approaches). Although forward simulations were computationally efficient (less than 1 minute each), each calibration required thousands of model runs. Execution of such large jobs was facilitated by using the Object Modeling System with the Cloud Services Innovation Platform to manage four virtual machines on a commercial web service configured with a total of 64 computational cores and 120 GB of memory. Results show how spatially distributed and averaged soil moisture and runoff at the outlet vary with different HRU delineations. The results will help guide HRU delineation, spatial resolution, and parameter estimation methods for improved hydrological simulations in this and other semi-arid agricultural watersheds.

  1. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration, and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches, such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias, and parameter inconsistencies across spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form of Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
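
    For reference, the Mualem-van Genuchten WRC and HCC forms underlying the data set are standard. A sketch follows, using Carsel-Parrish-style loam parameters as illustrative values:

```python
import numpy as np

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention theta(h); h is pressure head (negative, cm)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def K_mualem(h, Ks, alpha, n, L=0.5):
    """Mualem-van Genuchten unsaturated hydraulic conductivity."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return Ks * Se**L * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

h = -np.logspace(0, 4, 5)                        # pressure heads from -1 to -10^4 cm
print(theta_vg(h, 0.078, 0.43, 0.036, 1.56))     # loam-like parameters
print(K_mualem(h, 24.96, 0.036, 1.56))           # Ks in cm/day
```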

  2. First-order kinetic gas generation model parameters for wet landfills.

    PubMed

    Faour, Ayman A; Reinhart, Debra R; You, Huaxin

    2007-01-01

    Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it was preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, L0, were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (Vsto), L0, and k were 33 m³/Mg (megagram), 76 m³/Mg, and 0.28 year⁻¹, respectively. Parameters were also estimated for three full-scale wet landfills where waste was placed over many years. The k and L0 values estimated for these landfills were 0.21 year⁻¹ and 115 m³/Mg, 0.11 year⁻¹ and 95 m³/Mg, and 0.12 year⁻¹ and 87 m³/Mg, respectively. A group of data points from wet landfill cells with short-term data was also analyzed. Based on the upper 95% confidence interval, a conservative set of parameter estimates was suggested: a k of 0.3 year⁻¹ and an L0 of 100 m³/Mg, provided the design is optimized and the lag is minimized.
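
    As an illustration, a minimal sketch of first-order methane generation with a lag phase in the spirit of LandGEM, using the mean parameter estimates quoted above; treating the waste as a single placement at t = 0 is a simplifying assumption of the sketch:

        # First-order methane generation with a lag phase (LandGEM-like).
        # k, L0, lag follow the mean estimates quoted in the abstract; the
        # waste mass M and single placement time are illustrative.
        import numpy as np

        k, L0, lag = 0.28, 76.0, 1.5   # 1/yr, m3 CH4 per Mg waste, yr
        M = 1.0e5                      # Mg of waste placed at t = 0

        def methane_rate(t):
            """Methane generation rate [m3/yr] at t years after placement."""
            t = np.asarray(t, dtype=float)
            return np.where(t > lag, k * L0 * M * np.exp(-k * (t - lag)), 0.0)

        print(methane_rate(np.linspace(0.0, 30.0, 7)))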

  3. On Diffusive Climatological Models.

    NASA Astrophysics Data System (ADS)

    Griffel, D. H.; Drazin, P. G.

    1981-11-01

    A simple, zonally and annually averaged, energy-balance climatological model with diffusive heat transport and nonlinear albedo feedback is solved numerically. Some parameters of the model are varied, one by one, to find the resulting effects on the steady solution representing the climate. In particular, the outward radiation flux, the insolation distribution and the albedo parameterization are varied. We have found an accurate yet simple analytic expression for the mean annual insolation as a function of latitude and the obliquity of the Earth's rotation axis; this has enabled us to consider the effects of the oscillation of the obliquity. We have used a continuous albedo function which fits the observed values; it considerably reduces the sensitivity of the model. Climatic cycles, calculated by solving the time-dependent equation when parameters change slowly and periodically, are compared qualitatively with paleoclimatic records.
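
    As an illustration of this class of model, a minimal sketch of a zonally averaged diffusive energy-balance model with a continuous albedo ramp; parameter values are conventional textbook choices (with heat capacity absorbed into the time unit), not those of the paper:

        # 1-D diffusive energy-balance model in x = sin(latitude):
        # dT/dt = Q*S(x)*(1 - albedo(T)) - (A + B*T) + d/dx[D*(1-x^2)*dT/dx]
        import numpy as np

        nx = 91
        x = np.linspace(-1.0, 1.0, nx)
        dx = x[1] - x[0]
        Q, A, B, D = 340.0, 210.0, 2.0, 0.55    # W/m2 units for fluxes
        S = 1.0 - 0.48 * 0.5 * (3 * x**2 - 1)   # annual-mean insolation shape

        def albedo(T):
            # smooth ramp from ice-free (0.30) to ice-covered (0.62) albedo
            return 0.30 + 0.32 * 0.5 * (1.0 - np.tanh((T + 10.0) / 5.0))

        T = 15.0 - 30.0 * x**2                  # initial guess [deg C]
        xm = 0.5 * (x[1:] + x[:-1])             # interface positions
        dt = 2.0e-4                             # diffusion-stable explicit step
        for _ in range(50000):                  # march to a steady solution
            flux = D * (1.0 - xm**2) * np.diff(T) / dx
            div = np.empty(nx)
            div[1:-1] = np.diff(flux) / dx
            div[0], div[-1] = flux[0] / dx, -flux[-1] / dx  # no flux at poles
            T += dt * (Q * S * (1.0 - albedo(T)) - (A + B * T) + div)

        # equal spacing in x weights area correctly on the sphere
        print(f"global-mean temperature: {T.mean():.1f} deg C")

    Varying Q, the albedo ramp, or the shape of S(x) one at a time reproduces the kind of sensitivity experiments described in the abstract.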

  4. Conservative Estimation of Whole-body Average SAR in Infant Model for 0.3-6 GHz Far-Field Exposure

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Nagaya, Yoshio; Ito, Naoki; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi

    From an anatomically based Japanese model of a three-year-old child with a resolution of 1 mm, we developed a model of a nine-month-old Japanese infant by linear scaling. With these models, we calculated the whole-body average specific absorption rate (WBA-SAR) for plane-wave exposure from 0.1 to 6 GHz. A conservative estimate of the WBA-SAR was also investigated using three kinds of simply shaped models: cuboid, ellipsoid and spheroid, whose parameters were determined from the above three-year-old child model. As a result, the cuboid and ellipsoid were found to overestimate the WBA-SAR relative to the realistic model, whereas the spheroid underestimates it. Based on these findings for the different body models, we have specified the incident power density required to produce a WBA-SAR of 0.08 W/kg, which is the basic restriction for public exposure in the guidelines of the International Commission on Non-Ionizing Radiation Protection.

  5. Rapid calculation of accurate atomic charges for proteins via the electronegativity equalization method.

    PubMed

    Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav

    2013-10-28

    We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
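
    For orientation, a minimal sketch of the linear-algebra core of an EEM charge calculation: effective atomic electronegativities are equalized subject to conservation of total charge. The A, B and kappa values below are invented placeholders, not the parametrized EEM models of the paper:

        # EEM: solve  A_i + B_i*q_i + kappa*sum_j q_j/R_ij = chi_bar  for all i,
        # together with  sum_i q_i = Q_total  (unknowns: q_1..q_n and chi_bar).
        import numpy as np

        R = np.array([[0.00, 0.00, 0.0],    # toy 3-atom geometry [Angstrom]
                      [0.96, 0.00, 0.0],
                      [-0.24, 0.93, 0.0]])
        A = np.array([8.5, 4.1, 4.1])       # electronegativity-like parameters
        B = np.array([11.0, 13.8, 13.8])    # hardness-like parameters
        kappa, Q_total = 0.44, 0.0

        n = len(A)
        M = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        for i in range(n):
            M[i, i] = B[i]
            for j in range(n):
                if j != i:
                    M[i, j] = kappa / np.linalg.norm(R[i] - R[j])
            M[i, n] = -1.0                  # the equalized chi_bar unknown
            rhs[i] = -A[i]
        M[n, :n] = 1.0                      # total-charge constraint
        rhs[n] = Q_total

        sol = np.linalg.solve(M, rhs)
        print("charges:", sol[:n], "chi_bar:", sol[n])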

  6. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models, with model weights based on the evidence of the data. The HBMA therefore avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied to estimation of fluoride concentration in the Poldasht and Bazargan plains in Iran, where unusually high fluoride concentrations have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates using the HBMA method show better agreement with the observation data in the testing step because they are not based on a single model when no model carries a dominant weight.
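
    At any one level of the hierarchy, the averaging step reduces to ordinary Bayesian model averaging. A minimal sketch using BIC as a stand-in for the evidence (an assumption of the sketch; the paper's weighting may differ), including the within-/between-model variance split that HBMA propagates level by level:

        # Bayesian model averaging over three trained models at one point.
        # All numbers are illustrative.
        import numpy as np

        means = np.array([1.8, 2.1, 1.5])         # per-model predicted means
        variances = np.array([0.20, 0.15, 0.30])  # per-model predictive variances
        bic = np.array([104.2, 102.7, 108.9])     # fit criterion (lower = better)

        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                              # posterior model weights

        mean_bma = np.sum(w * means)
        within = np.sum(w * variances)                  # avg within-model variance
        between = np.sum(w * (means - mean_bma) ** 2)   # spread across models
        print(mean_bma, within + between)               # BMA mean and variance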

  7. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded here as degradation processes, evolve nonlinearly and in multiple stages, so a nonlinear degradation model is needed. In this research, our study was based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gaussian filter) was used to improve the estimation of the model parameters. Then, to maximize the likelihood function for parameter estimation, we used a numerical optimization method, the simplex method, in a cyclic calculation. The modeling results show that the degradation mechanism changes during the degradation of the high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from life distributions at different times calculated by the multistage model.
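
    A minimal sketch of a two-stage Wiener degradation path with closed-form, increment-based estimates of the stage drifts and the diffusion coefficient; the hard switch at a known change point is a simplification of the paper's transition-function model, and all values are illustrative:

        # Two-stage Wiener process: drift mu1 before tau, mu2 after.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, n, tau = 0.1, 400, 20.0          # step [h], samples, change point [h]
        mu1, mu2, sigma = 0.05, 0.15, 0.08   # stage drifts, diffusion coefficient

        t = np.arange(n) * dt
        drift = np.where(t < tau, mu1, mu2)
        dx = drift[1:] * dt + sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
        x = np.concatenate([[0.0], np.cumsum(dx)])   # simulated degradation path

        # per-stage MLE: mu_hat = mean(dx)/dt, sigma2_hat = var(dx)/dt
        for name, mask in [("stage 1", t[1:] <= tau), ("stage 2", t[1:] > tau)]:
            print(name, dx[mask].mean() / dt, dx[mask].var() / dt)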

  8. Indirect and direct methods for measuring a dynamic throat diameter in a solid rocket motor

    NASA Astrophysics Data System (ADS)

    Colbaugh, Lauren

    In a solid rocket motor, nozzle throat erosion is dictated by propellant composition, throat material properties, and operating conditions. Throat erosion has a significant effect on motor performance, so it must be accurately characterized to produce a good motor design. In order to correlate throat erosion rate to other parameters, it is first necessary to know what the throat diameter is throughout a motor burn. Thus, an indirect method and a direct method for determining throat diameter in a solid rocket motor are investigated in this thesis. The indirect method looks at the use of pressure and thrust data to solve for throat diameter as a function of time. The indirect method's proof of concept was shown by the good agreement between the ballistics model and the test data from a static motor firing. The ballistics model was within 10% of all measured and calculated performance parameters (e.g. average pressure, specific impulse, maximum thrust, etc.) for tests with throat erosion and within 6% of all measured and calculated performance parameters for tests without throat erosion. The direct method involves the use of x-rays to directly observe a simulated nozzle throat erode in a dynamic environment; this is achieved with a dynamic calibration standard. An image processing algorithm is developed for extracting the diameter dimensions from the x-ray intensity digital images. Static and dynamic tests were conducted. The measured diameter was compared to the known diameter in the calibration standard. All dynamic test results were within +6% / -7% of the actual diameter. Part of the edge detection method consists of dividing the entire x-ray image by an average pixel value, calculated from a set of pixels in the x-ray image. It was found that the accuracy of the edge detection method depends upon the selection of the average pixel value area and subsequently the average pixel value. An average pixel value sensitivity analysis is presented. Both the indirect method and the direct method prove to be viable approaches to determining throat diameter during solid rocket motor operation.
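
    The indirect idea can be caricatured in a few lines: with measured thrust F and chamber pressure Pc, and a thrust coefficient Cf, the throat area follows from F = Cf * Pc * At. Treating Cf as known and constant is a strong simplification of the thesis's internal ballistics model, and all numbers below are invented:

        # Throat diameter inferred from thrust and chamber pressure traces.
        import numpy as np

        Cf = 1.4                                 # assumed constant thrust coefficient
        Pc = np.array([6.0e6, 5.8e6, 5.5e6])     # chamber pressure trace [Pa]
        F = np.array([12.0e3, 12.1e3, 12.3e3])   # thrust trace [N]

        At = F / (Cf * Pc)                       # throat area [m2]
        D = 2.0e3 * np.sqrt(At / np.pi)          # throat diameter [mm]
        print(D)                                 # increasing D indicates erosion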

  9. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and with the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  10. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter/submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (the coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as functions of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  11. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model for urban road traffic sensor data is proposed based on a change-point detection method. First-order differencing is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. Through computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel times. The results show that the model has high accuracy in travel time forecasting.
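
    A minimal sketch of the pipeline described above: first-order differencing, a simple mean-shift change-point scan (a CUSUM-style stand-in for the paper's algorithm), and an ARIMA forecast fitted to the most recent homogeneous segment, here via statsmodels (an implementation choice of the sketch):

        # Change-point detection on differenced travel times + ARIMA forecast.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        y = np.concatenate([rng.normal(300, 5, 120),   # free-flow regime [s]
                            rng.normal(420, 8, 80)])   # congested regime [s]

        d = np.diff(y)                          # first-order differencing
        cusum = np.cumsum(d - d.mean())
        cp = int(np.argmax(np.abs(cusum))) + 1  # estimated change point
        print("change point near index", cp)

        fit = ARIMA(y[cp:], order=(1, 0, 1)).fit()   # last similar-state section
        print(fit.forecast(steps=5))                 # next five travel times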

  12. Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Leung, K.

    2013-12-01

    Anticipating and preparing for elevated water quality parameter levels in critical water sources, using weather forecasts, is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season-ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models, to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.

  13. Multivariate space-time analysis of PRE-STORM precipitation

    NASA Technical Reports Server (NTRS)

    Polyak, Ilya; North, Gerald R.; Valdes, Juan B.

    1994-01-01

    This paper presents the methodologies and results of the multivariate modeling and two-dimensional spectral and correlation analysis of PRE-STORM rainfall gauge data. Estimated parameters of the models for the specific spatial averages clearly indicate the eastward and southeastward wave propagation of rainfall fluctuations. A relationship between the coefficients of the diffusion equation and the parameters of the stochastic model of rainfall fluctuations is derived that leads directly to the exclusive use of rainfall data to estimate advection speed (about 12 m/s) as well as other coefficients of the diffusion equation of the corresponding fields. The statistical methodology developed here can be used for confirmation of physical models by comparison of the corresponding second-moment statistics of the observed and simulated data, for generating multiple samples of any size, for solving the inverse problem of the hydrodynamic equations, and for application in some other areas of meteorological and climatological data analysis and modeling.

  14. Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI

    NASA Astrophysics Data System (ADS)

    Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam

    2017-04-01

    White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then demonstrate experimentally in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis); the extra-axonal fraction can be estimated as well. These results suggest that our model is oversimplified, yet they evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence. We further show that a simple general linear model can predict the average axonal diameters from the four model parameters, and we map these average axonal diameters in the spinal cords. While further modelling and theoretical developments are clearly necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient-echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.

  15. Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI.

    PubMed

    Nunes, Daniel; Cruz, Tomás L; Jespersen, Sune N; Shemesh, Noam

    2017-04-01

    White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then demonstrate experimentally in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis); the extra-axonal fraction can be estimated as well. These results suggest that our model is oversimplified, yet they evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence. We further show that a simple general linear model can predict the average axonal diameters from the four model parameters, and we map these average axonal diameters in the spinal cords. While further modelling and theoretical developments are clearly necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient-echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
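
    The key signal effect is easy to reproduce: with two compartments of different apparent T2* and a susceptibility-induced frequency offset between them, the magnitude of the summed gradient-echo signal can dip and recover with echo time. A minimal sketch with illustrative values, not the paper's fitted parameters:

        # Two-compartment gradient-echo signal with a frequency offset:
        # the magnitude decays non-monotonically as the two pools beat.
        import numpy as np

        f_a, f_e = 0.6, 0.4          # axonal / extra-axonal signal fractions
        T2a, T2e = 0.040, 0.060      # apparent T2* [s]
        dw = 2 * np.pi * 40.0        # inter-compartment offset [rad/s] (~40 Hz)

        t = np.linspace(0.0, 0.08, 9)   # echo times [s]
        s = np.abs(f_a * np.exp(1j * dw * t - t / T2a) + f_e * np.exp(-t / T2e))
        print(s)                        # dips near dw*t = pi, then recovers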

  16. Evaluation of spectral domain optical coherence tomography parameters in ocular hypertension, preperimetric, and early glaucoma

    PubMed Central

    Aydoğan, Tuğba; Akçay, Betül İlkay Sezgin; Kardeş, Esra; Ergin, Ahmet

    2017-01-01

    Purpose: The objective of this study is to evaluate the diagnostic ability of retinal nerve fiber layer (RNFL), macular, and optic nerve head (ONH) parameters in healthy subjects, ocular hypertension (OHT), preperimetric glaucoma (PPG), and early glaucoma (EG) patients, and to reveal factors affecting the diagnostic ability of spectral domain optical coherence tomography (SD-OCT) parameters as well as risk factors for glaucoma. Methods: Three hundred and twenty-six eyes (89 healthy, 77 OHT, 94 PPG, and 66 EG eyes) were analyzed. RNFL, macular, and ONH parameters were measured with SD-OCT. The area under the receiver operating characteristic curve (AUC) and the sensitivity at 95% specificity were calculated. Logistic regression analysis was used to determine the glaucoma risk factors. Receiver operating characteristic regression analysis was used to evaluate the influence of covariates on the diagnostic ability of parameters. Results: In PPG patients, the parameters with the largest AUC values were average RNFL thickness (0.83) and rim volume (0.83). In EG patients, the parameter with the largest AUC value was average RNFL thickness (0.98). The logistic regression analysis showed that average RNFL thickness was a risk factor for both PPG and EG. The diagnostic ability of average RNFL and average ganglion cell complex thickness increased as disease severity increased. Signal strength index did not affect diagnostic abilities. The diagnostic ability of average RNFL and rim area increased as disc area increased. Conclusion: When evaluating patients with glaucoma, patients at risk for glaucoma, and healthy controls, RNFL parameters deserve more attention in clinical practice. Further studies are needed to fully understand the influence of covariates on the diagnostic ability of OCT parameters. PMID:29133640
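
    The two headline statistics, AUC and sensitivity at 95% specificity, can be computed as below; the sketch uses scikit-learn and synthetic RNFL values purely for illustration:

        # AUC and sensitivity at 95% specificity for a thickness-based score.
        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(2)
        rnfl = np.concatenate([rng.normal(95, 8, 89),    # healthy eyes
                               rng.normal(78, 10, 66)])  # early glaucoma eyes
        label = np.concatenate([np.zeros(89), np.ones(66)])
        score = -rnfl                   # thinner RNFL -> higher risk score

        print("AUC:", roc_auc_score(label, score))
        fpr, tpr, _ = roc_curve(label, score)
        print("sensitivity at 95% specificity:", tpr[fpr <= 0.05].max())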

  17. LakeVOC: A Deterministic Model to Estimate Volatile Organic Compound Concentrations in Reservoirs and Lakes

    USGS Publications Warehouse

    Bender, David A.; Asher, William E.; Zogorski, John S.

    2003-01-01

    This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations was conducted to verify LakeVOC's representation of mixing, dilution, and gas-exchange characteristics in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were made for an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion of an actual reservoir with daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured (averaged) concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion, likely because the coarser averaging of model inputs causes a loss of detail in the model estimates.
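
    The two-film flux mentioned above places liquid- and gas-side resistances in series, coupled through the dimensionless Henry's law constant. A minimal sketch with illustrative transfer velocities and an MTBE-like Henry's constant:

        # Two-film air-water exchange: overall liquid-phase transfer velocity
        # and net volatilization flux. All values are illustrative.
        k_l = 2.0e-5     # liquid-film transfer velocity [m/s]
        k_g = 5.0e-3     # gas-film transfer velocity [m/s]
        H = 0.02         # dimensionless Henry's constant (C_air = H*C_water at eq.)

        K_ol = 1.0 / (1.0 / k_l + 1.0 / (H * k_g))   # overall velocity [m/s]
        C_w, C_a = 5.0e-6, 1.0e-9                    # water / air conc. [kg/m3]
        flux = K_ol * (C_w - C_a / H)                # [kg/m2/s]; > 0 = outgassing
        print(K_ol, flux)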

  18. Structure and Dynamics of Solvent Landscapes in Charge-Transfer Reactions

    NASA Astrophysics Data System (ADS)

    Leite, Vitor B. Pereira

    The dynamics of solvent polarization plays a major role in the control of charge-transfer reactions. The success of Marcus theory, which describes the solvent influence via a single collective quadratic polarization coordinate, has been remarkable. Onuchic and Wolynes have proposed (J. Chem. Phys. 98, 2218, 1993) a simple model demonstrating how a many-dimensional complex model composed of several dipole moments (representing solvent molecules or polar groups in proteins) can be reduced, in the appropriate limits, to the Marcus model. This work presents a dynamical study of the same model, which is characterized by two parameters: an average dipole-dipole interaction and a term associated with the roughness of the potential energy landscape. It is shown why the effective potential, obtained using a thermodynamic approach, is appropriate for the dynamics of the system. At high temperatures, the system exhibits effective diffusive one-dimensional dynamics, where the Born-Marcus limit is recovered. At low temperatures, a glassy phase appears with slow, non-self-averaging dynamics. At intermediate temperatures, the concept of equivalent diffusion paths and polarization-dependence effects are discussed. This approach is extended to treat more realistic solvent models. Real solvents are discussed in terms of the simple parameters described above, and an analysis of how different regimes affect the rate of charge transfer is presented. Finally, these ideas are related to analogous problems in other areas.

  19. Gasification Characteristics and Kinetics of Coke with Chlorine Addition

    NASA Astrophysics Data System (ADS)

    Wang, Cui; Zhang, Jianliang; Jiao, Kexin; Liu, Zhengjian; Chou, Kuochih

    2017-10-01

    The gasification process of metallurgical coke with 0, 1.122, 3.190, and 7.132 wt pct chlorine was investigated by the thermogravimetric method from ambient temperature to 1593 K (1320 °C) in a purified CO2 atmosphere. The variations in the characteristic temperatures (Ti decreases gradually with increasing chlorine content, while Tf and Tmax first decrease and then increase but both trend downward overall) indicate that the coke gasification process is catalyzed by the chlorine addition. The kinetic model of chlorine-containing coke gasification was then obtained by determining the average apparent activation energy, the optimal reaction model, and the pre-exponential factor. The average apparent activation energies were 182.962, 118.525, 139.632, and 111.953 kJ/mol, respectively, following the same decreasing trend as the temperature parameters from the thermogravimetric analysis; this also demonstrates that the coke gasification process is catalyzed by chlorine. The optimal kinetic model describing the gasification of chlorine-containing coke was the Šesták-Berggren model, identified using Málek's method, and the pre-exponential factors were 6.688 × 10⁵, 2.786 × 10³, 1.782 × 10⁴, and 1.324 × 10³ min⁻¹, respectively. Predictions of chlorine-containing coke gasification from the Šesták-Berggren model fitted the experimental data well.
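
    A minimal sketch of the Šesták-Berggren rate form f(α) = α^m (1 - α)^n integrated over a linear heating ramp, using the chlorine-free mean activation energy and pre-exponential factor quoted above; the exponents m and n are assumed, since the fitted values are not given in the abstract:

        # Sestak-Berggren kinetics: da/dT = (A/beta)*exp(-E/(R*T))*a^m*(1-a)^n
        import numpy as np

        E, A = 182962.0, 6.688e5    # J/mol and 1/min (chlorine-free coke)
        m, n = 0.5, 1.0             # assumed Sestak-Berggren exponents
        Rgas, beta = 8.314, 10.0    # J/(mol K), heating rate [K/min]

        T, a, dT = 800.0, 1.0e-4, 0.5   # start [K], seed conversion, step [K]
        while a < 0.99 and T < 1593.0:
            rate = A * np.exp(-E / (Rgas * T)) * a**m * (1.0 - a)**n  # [1/min]
            a += rate * dT / beta       # dt = dT / beta
            T += dT
        print(f"conversion {a:.2f} reached by T = {T:.0f} K")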

  20. Angular Size Test on the Expansion of the Universe

    NASA Astrophysics Data System (ADS)

    López-Corredoira, Martín

    Assuming the standard cosmological model to be correct, the average linear size of galaxies with the same luminosity is six times smaller at z = 3.2 than at z = 0, and their average angular size for a given luminosity is approximately proportional to z⁻¹. Neither the hypothesis that galaxies which formed earlier have much higher densities, nor luminosity evolution, merger rates, or massive outflows due to a quasar feedback mechanism, is enough to justify such a strong size evolution. Moreover, at high redshift the intrinsic ultraviolet surface brightness would be prohibitively high under this evolution, and the velocity dispersion much higher than observed. We explore here another possibility of overcoming this problem: considering different cosmological scenarios that might make the observed angular sizes compatible with a weaker evolution. One of the explored models, a very simple phenomenological extrapolation of the linear Hubble law in a Euclidean static universe, fits the angular size versus redshift dependence quite well; the angular size is also approximately proportional to z⁻¹ in this cosmological model. There are no free parameters derived ad hoc, although the error bars allow a slight size/luminosity evolution. The supernova Ia Hubble diagram can also be explained in terms of this model without any ad hoc fitted parameter. NB: I do not argue here that the true universe is static. My intention is only to discuss which theoretical models fit some observational cosmology data better.
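
    The comparison at stake can be made concrete in a few lines: the angular size of a fixed proper length under the flat ΛCDM angular diameter distance versus the static Euclidean model with the linear Hubble law d = cz/H0, where θ is proportional to z⁻¹ exactly. Cosmological parameters are conventional values, not fits from the paper:

        # Angular size of a fixed proper length: flat LCDM vs. static Euclidean
        # model with d = c*z/H0. Parameters are conventional, not the paper's.
        import numpy as np
        from scipy.integrate import quad

        H0, Om, c = 70.0, 0.3, 299792.458    # km/s/Mpc, matter density, km/s

        def d_A_lcdm(z):
            """Angular diameter distance [Mpc] in flat LambdaCDM."""
            invE = lambda zz: 1.0 / np.sqrt(Om * (1 + zz) ** 3 + (1 - Om))
            return (c / H0) * quad(invE, 0.0, z)[0] / (1.0 + z)

        L = 0.01                             # proper size [Mpc] (~10 kpc galaxy)
        for z in [0.5, 1.0, 2.0, 3.2]:
            theta_lcdm = L / d_A_lcdm(z)     # [rad]
            theta_static = L / (c * z / H0)  # [rad]
            print(z, theta_lcdm / theta_static)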
