Sample records for empirical fitting parameters

  1. Charge Transport in Nonaqueous Liquid Electrolytes: A Paradigm Shift

    DTIC Science & Technology

    2015-05-18

    that provide inadequate descriptions of experimental data, often using empirical equations whose fitting parameters have no physical significance... The hydrodynamic model, utilizing the Stokes equation, describes isothermal conductivity, the self-diffusion coefficient, and the dielectric...

  2. Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"

    NASA Astrophysics Data System (ADS)

    Lovell-Smith, J. W.; Saunders, P.; Feistel, R.

    2017-10-01

    Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
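
    As a rough illustration of the workflow this abstract describes, the sketch below runs a generalized least-squares fit that propagates the data covariance into the parameter covariance, then propagates that covariance into a derived quantity, GUM-style. The quadratic model, data, and covariance kernel are invented stand-ins, not the IAPWS-95 equations or the MSL tools.

    ```python
    # Sketch: GLS fit of a toy quadratic model with correlated measurement
    # errors, then first-order propagation of the parameter covariance into
    # a derived quantity (all data and the covariance kernel are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)

    x = np.linspace(0.0, 1.0, 20)
    X = np.column_stack([np.ones_like(x), x, x**2])            # design matrix
    V = 0.01 * np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # data covariance
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.multivariate_normal(np.zeros(x.size), V)

    # Generalized least squares: the data covariance propagates directly into
    # the parameter covariance, which can be published with the equation.
    Vinv = np.linalg.inv(V)
    cov_beta = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov_beta @ X.T @ Vinv @ y

    # Propagate parameter covariance into a derived quantity q = y(0.5).
    J = np.array([1.0, 0.5, 0.25])            # Jacobian dq/dbeta at x = 0.5
    q, u_q = J @ beta, np.sqrt(J @ cov_beta @ J)
    print(f"q = {q:.3f} +/- {u_q:.3f}")
    ```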

  3. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
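
    A minimal sketch of fitting one such temperature-response curve parameterised directly by Topt and Pmax. The Gaussian-type functional form and the data are illustrative assumptions, not one of the twelve models the authors evaluated.

    ```python
    # Sketch: fit a Gaussian-type temperature response parameterised by
    # Pmax, Topt, and a width (form and data are assumptions).
    import numpy as np
    from scipy.optimize import curve_fit

    def p_of_t(T, p_max, t_opt, width):
        """Photosynthetic rate vs temperature, peaking at t_opt."""
        return p_max * np.exp(-((T - t_opt) / width) ** 2)

    T = np.array([15.0, 20, 25, 28, 30, 32, 35, 38, 40])
    P = np.array([2.1, 4.0, 6.2, 7.1, 7.4, 7.0, 5.5, 3.2, 1.8])

    popt, pcov = curve_fit(p_of_t, T, P, p0=[7.0, 30.0, 8.0])
    perr = np.sqrt(np.diag(pcov))
    print(f"Pmax = {popt[0]:.2f} +/- {perr[0]:.2f}")
    print(f"Topt = {popt[1]:.1f} +/- {perr[1]:.1f} degC")
    ```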

  4. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    PubMed

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  5. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123

  6. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
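
    A hedged sketch of parameter-count-penalised model comparison in this spirit, using BIC as a simple stand-in for the full Bayesian evidence the authors compute; the single-mode data and both Maxwell-type relaxation models are invented.

    ```python
    # Sketch: BIC comparison of one- vs two-mode Maxwell-type relaxation
    # fits. The extra mode improves the raw fit slightly but is penalised
    # for its two extra parameters (synthetic single-mode data).
    import numpy as np
    from scipy.optimize import curve_fit

    def maxwell_1(t, g1, tau1):
        return g1 * np.exp(-t / tau1)

    def maxwell_2(t, g1, tau1, g2, tau2):
        return g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

    rng = np.random.default_rng(1)
    t = np.linspace(0.1, 10.0, 50)
    data = 2.0 * np.exp(-t / 1.5) + rng.normal(0.0, 0.05, t.size)

    def bic(model, p0):
        popt, _ = curve_fit(model, t, data, p0=p0, maxfev=20000)
        rss = np.sum((data - model(t, *popt)) ** 2)
        n, k = t.size, len(popt)
        return n * np.log(rss / n) + k * np.log(n)   # lower is better

    print("1-mode BIC:", round(bic(maxwell_1, [1.0, 1.0]), 1))
    print("2-mode BIC:", round(bic(maxwell_2, [1.0, 1.0, 0.5, 3.0]), 1))
    ```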

  7. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

    The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contains useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable, trustworthy quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
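
    The core idea, weighting each ACF lag by an explicit variance estimate rather than an ad hoc one, can be sketched as follows; the exponential decay model and all variance numbers are placeholders, not the FPFM expressions.

    ```python
    # Sketch: weight each ACF lag by an explicit variance estimate when
    # fitting a decay model (placeholder numbers, not the FPFM expressions).
    import numpy as np
    from scipy.optimize import curve_fit

    lags = np.arange(1, 11) * 2.4e-3   # lag times in seconds (assumed)
    acf = np.array([0.95, 0.82, 0.71, 0.60, 0.52,
                    0.43, 0.38, 0.31, 0.27, 0.22])
    sigma = 0.02 + 0.05 * np.linspace(0, 1, 10)  # per-lag std devs, standing
                                                 # in for signal/noise/clutter

    def decay(t, w):
        return np.exp(-w * t)   # decay rate maps to spectral width

    popt, pcov = curve_fit(decay, lags, acf, p0=[50.0],
                           sigma=sigma, absolute_sigma=True)
    print(f"decay rate = {popt[0]:.1f} 1/s +/- {np.sqrt(pcov[0, 0]):.1f}")
    ```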

  8. Validation of a mathematical model of the bovine estrous cycle for cows with different estrous cycle characteristics.

    PubMed

    Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H

    2017-11-01

    A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.

  9. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
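
    For reference, the RMSEA point estimate used in these close-fit tests can be computed from a model chi-square as below (formula per MacCallum et al., 1996); the chi-square, degrees of freedom, and sample size are made-up numbers.

    ```python
    # Point-estimate RMSEA from a model chi-square (MacCallum et al., 1996);
    # all inputs are made-up numbers for illustration.
    import math

    def rmsea(chi2, df, n):
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    # Close fit corresponds to RMSEA <= .05; exact fit to RMSEA = 0.
    print(round(rmsea(chi2=112.3, df=48, n=400), 4))
    ```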

  10. Empirical Allometric Models to Estimate Total Needle Biomass For Loblolly Pine

    Treesearch

    Hector M. de los Santos-Posadas; Bruce E. Borders

    2002-01-01

    Empirical geometric models based on the cone surface formula were adapted and used to estimate total dry needle biomass (TNB) and live branch basal area (LBBA). The results suggest that the empirical geometric equations produced good fit and stable parameters while estimating TNB and LBBA. The data used include trees from a 12-year-old spacing study and a set of...

  11. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from less than 1% to 15%. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.

  12. Estimating sunspot number

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.

    1984-01-01

    An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.

  13. Empirical models for fitting of oral concentration time curves with and without an intravenous reference.

    PubMed

    Weiss, Michael

    2017-06-01

    Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum of IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
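
    A sketch of the direct-fit case: a weighted sum of two inverse Gaussian densities, scaled by AUC, fitted to oral concentration-time data. The standard (mu, lambda) IG parameterisation, the data, and the starting values and bounds are assumptions for illustration.

    ```python
    # Sketch: fit an AUC-scaled mixture of two inverse Gaussian densities to
    # oral concentration-time data; parameterisation and data are assumed.
    import numpy as np
    from scipy.optimize import curve_fit

    def ig(t, mu, lam):
        return np.sqrt(lam / (2 * np.pi * t**3)) * np.exp(
            -lam * (t - mu) ** 2 / (2 * mu**2 * t))

    def model(t, auc, w, mu1, lam1, mu2, lam2):
        return auc * (w * ig(t, mu1, lam1) + (1 - w) * ig(t, mu2, lam2))

    t = np.array([0.25, 0.5, 1.0, 1.5, 2, 3, 4, 6, 8, 12, 24])
    c = np.array([0.8, 2.1, 3.6, 3.9, 3.5, 2.6, 1.9, 1.0, 0.55, 0.18, 0.02])

    p0 = [12.0, 0.7, 1.5, 2.0, 5.0, 8.0]
    bounds = ([0, 0, 0, 0, 0, 0], [np.inf, 1, np.inf, np.inf, np.inf, np.inf])
    popt, _ = curve_fit(model, t, c, p0=p0, bounds=bounds)
    auc, w, mu1, lam1, mu2, lam2 = popt
    mrt = w * mu1 + (1 - w) * mu2   # mean residence time of fitted mixture
    print(f"AUC = {auc:.2f}, MRT = {mrt:.2f} h")
    ```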

  14. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits the results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were indicated as leading to the apparent linearity.

  15. Bayesian parameter estimation for chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie

    2016-09-01

    The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare fitting all partial waves of the interaction simultaneously to cross-section data with fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.

  16. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters for many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.

  17. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.

  18. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction of path effects, site response, and instrument responses. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine large quantities of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.

  19. Some Empirical Evidence for Latent Trait Model Selection.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…

  20. Study of a generalized Birks formula for the scintillation response of a CaMoO4 crystal

    NASA Astrophysics Data System (ADS)

    Lee, J. Y.; Kim, H. J.; Kang, Sang Jun; Lee, M. H.

    2017-12-01

    We have investigated the scintillation characteristics of CaMoO4 (CMO) crystals by using a gamma source and various internal alpha sources. A 137Cs source with 662-keV gamma-rays was used for the gamma-quanta light yield calibration. Internal radioactive contaminations provided alpha particles with different energies from 5.41 to 7.88 MeV. We developed a C++ program based on the ROOT package for the fitting of parameters in a generalized Birks semi-empirical formula by combining the experimental and the simulation data. Results for the fitted Birks parameters are kB1 = 3.3 × 10^-3 g/(MeV cm^2) for the 1st parameter and kB2 = 7.9 × 10^-5 (g/(MeV cm^2))^2 for the 2nd parameter. The χ^2/n.d.f. (number of degrees of freedom) is calculated as 0.1/4. We were able to estimate the 238U and 234U contaminations in a CMO crystal by using the generalized Birks semi-empirical formula.
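
    A minimal sketch of the fitting step, assuming the generalized Birks quenching form dL/dE = S / (1 + kB1(dE/dx) + kB2(dE/dx)^2); the stopping powers and relative light yields below are invented, not the CMO measurements.

    ```python
    # Sketch: fit kB1, kB2 in the generalized Birks quenching formula to
    # (invented) relative light yields vs stopping power.
    import numpy as np
    from scipy.optimize import curve_fit

    dedx = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])  # MeV cm^2/g
    ly = np.array([0.61, 0.42, 0.32, 0.25, 0.20])          # yield rel. gamma

    def birks(dedx, kb1, kb2):
        return 1.0 / (1.0 + kb1 * dedx + kb2 * dedx**2)

    popt, _ = curve_fit(birks, dedx, ly, p0=[3e-3, 1e-6])
    print(f"kB1 = {popt[0]:.2e} g/(MeV cm^2), "
          f"kB2 = {popt[1]:.2e} (g/(MeV cm^2))^2")
    ```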

  21. Variance-based selection may explain general mating patterns in social insects.

    PubMed

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.

  22. Outdoor ground impedance models.

    PubMed

    Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram

    2011-05-01

    Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.

  23. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
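
    A sketch of the same idea with the Kolmogorov-Smirnov statistic: minimise the EDF-CDF distance over the three Weibull parameters (Nelder-Mead stands in here for Powell's procedure; the failure data are synthetic).

    ```python
    # Sketch: estimate 3-parameter Weibull values by minimising the KS
    # distance between the EDF and the candidate CDF (synthetic data).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(2)
    data = np.sort(weibull_min.rvs(c=2.5, loc=10.0, scale=50.0,
                                   size=60, random_state=rng))

    def ks_stat(params):
        shape, loc, scale = params
        if shape <= 0 or scale <= 0 or loc >= data[0]:
            return np.inf                      # keep the support feasible
        cdf = weibull_min.cdf(data, c=shape, loc=loc, scale=scale)
        n = data.size
        hi = np.max(np.arange(1, n + 1) / n - cdf)
        lo = np.max(cdf - np.arange(0, n) / n)
        return max(hi, lo)

    res = minimize(ks_stat, x0=[2.0, 5.0, 40.0], method="Nelder-Mead")
    print("shape, loc, scale =", np.round(res.x, 2), " KS =", round(res.fun, 3))
    ```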

  24. A Comparison of the Fit of Empirical Data to Two Latent Trait Models. Report No. 92.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    Goodness of fit of raw test score data were compared, using two latent trait models: the Rasch model and the Birnbaum three-parameter logistic model. Data were taken from various achievement tests and the Scholastic Aptitude Test (Verbal). A minimum sample size of 1,000 was required, and the minimum test length was 40 items. Results indicated that…

  25. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
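
    The constrained-regression view can be sketched as non-negative least squares on feature indicators, with bootstrap standard errors standing in for the theoretical ones derived in the paper; the design matrix and data are invented.

    ```python
    # Sketch: proximities as a nonnegative combination of feature indicators;
    # bootstrap gives empirical standard errors (design and data invented).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_pairs, n_feat = 40, 5
    X = rng.integers(0, 2, size=(n_pairs, n_feat)).astype(float)
    w_true = np.array([0.8, 0.0, 1.5, 0.4, 0.0])
    d = X @ w_true + rng.normal(0.0, 0.1, n_pairs)

    w_hat, _ = nnls(X, d)                      # positivity-constrained fit

    boot = np.empty((500, n_feat))             # bootstrap standard errors
    for b in range(500):
        idx = rng.integers(0, n_pairs, n_pairs)
        boot[b], _ = nnls(X[idx], d[idx])
    print("weights:", np.round(w_hat, 3))
    print("SEs:    ", np.round(boot.std(axis=0, ddof=1), 3))
    ```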

  26. Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

    DOE PAGES

    Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...

    2013-02-23

    We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in the predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
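
    A sketch of one ingredient, propagating correlated Arrhenius-parameter uncertainty into a rate constant k = A T^b exp(-Ea/RT) by Monte Carlo; the joint normal over (ln A, b, Ea) and its correlation structure are illustrative assumptions, not the paper's polynomial chaos machinery.

    ```python
    # Sketch: sample correlated (ln A, b, Ea) and push through the modified
    # Arrhenius form k = A * T**b * exp(-Ea / (R*T)).
    import numpy as np

    R = 8.314                                       # J/(mol K)
    mean = np.array([np.log(1.0e13), 0.0, 1.6e5])   # ln A, b, Ea (J/mol)
    cov = np.array([[0.25,   0.0, 3.0e3],
                    [0.0,   1e-4,   0.0],
                    [3.0e3,  0.0, 4.0e7]])          # correlated ln A and Ea

    rng = np.random.default_rng(4)
    lnA, b, Ea = rng.multivariate_normal(mean, cov, size=10000).T

    T = 1200.0
    k = np.exp(lnA) * T**b * np.exp(-Ea / (R * T))
    lo, hi = np.quantile(k, [0.025, 0.975])
    print(f"k({T:.0f} K): median {np.median(k):.3e}, "
          f"95% interval [{lo:.3e}, {hi:.3e}]")
    ```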

  27. Equal Area Logistic Estimation for Item Response Theory

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li

    2009-08-01

    Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.

  28. Acoustic properties of reticulated plastic foams

    NASA Astrophysics Data System (ADS)

    Cummings, A.; Beadle, S. P.

    1994-08-01

    Some general aspects of sound propagation in rigid porous media are discussed, particularly with reference to the use of a single dimensionless frequency parameter, and the role of this, in the light of the possibility of varying gas properties, is examined. Steady flow resistance coefficients of porous media are also considered, and simple scaling relationships between these coefficients and 'system parameters' are derived. The results of a series of measurements of the bulk acoustic properties of 12 geometrically similar, fully reticulated, polyurethane foams are presented, and empirical curve-fitting coefficients are found; the curve-fitting formulae are valid within the experimental range of values of the frequency parameter. Comparison is made between the measured data and an alternative, fairly recently published, semi-empirical set of formulae. Measurements of the steady flow-resistive coefficients are also given and both the acoustical and flow-resistive data are shown to be consistent with theoretical ideas. The acoustical and flow-resistive data should be of use in predicting the acoustic bulk properties of open-celled foams of types similar to those used in the experimental tests.

  29. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent.

  30. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

    The Tai-Tung earthquake (ML=6.2) occurred at the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using the seismograms observed by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C which are needed for the empirical Green's function method by Irikura (1986). This method is called the “source spectral ratio fitting method”, which gives an estimate of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra which obeys the model (Miyake et al., 1999). This method has the advantage of removing site effects in evaluating the parameters. The best source model of the Tai-Tung mainshock in 2006 was estimated by comparing the observed waveforms with synthetics computed using the empirical Green's function method. The size of the asperity is about 3.5 km length along the strike direction by 7.0 km width along the dip direction. The rupture started at the left-bottom of the asperity and extended radially to the right-upper direction.

  31. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

    The Tapu earthquake (ML 5.7) occurred at the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C which are needed for the empirical Green's function method by Irikura (1986). This method is called the "source spectral ratio fitting method", which gives an estimate of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra which obeys the model (Miyake et al., 1999). This method has the advantage of removing site effects in evaluating the parameters. The best source model of the Tapu mainshock in 1993 is estimated by comparing the observed waveforms with the synthetic ones computed using the empirical Green's function method. The size of the asperity is about 2.1 km length along the strike direction by 1.5 km width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially to the left-upper direction.
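
    The spectral-ratio step can be sketched by fitting the ratio of two omega-square source spectra to an observed mainshock/EGF spectral ratio, recovering the moment ratio and both corner frequencies; the spectra here are synthetic, with an assumed lognormal noise model.

    ```python
    # Sketch: fit the ratio of two omega-square source spectra to an
    # observed mainshock/EGF spectral ratio (synthetic data).
    import numpy as np
    from scipy.optimize import curve_fit

    def spec_ratio(f, m_ratio, fc_large, fc_small):
        return m_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

    rng = np.random.default_rng(5)
    f = np.logspace(-1, 1.3, 40)                       # 0.1-20 Hz
    obs = spec_ratio(f, 300.0, 0.4, 3.0) * rng.lognormal(0.0, 0.1, f.size)

    popt, _ = curve_fit(spec_ratio, f, obs, p0=[100.0, 1.0, 5.0])
    print("moment ratio %.0f, fc(main) %.2f Hz, fc(EGF) %.2f Hz" % tuple(popt))
    ```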

  32. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter here is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of epistemic uncertainty of a radioecological parameter on the output result is much larger than the one caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results as well as only in the region of the upper percentiles of the model parameters.
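
    A sketch of the nested (second-order) Monte Carlo structure: an outer loop samples epistemic uncertainty in a parameter's distribution moments, an inner loop samples its aleatory variability. The lognormal transfer-factor model and all numbers are toy assumptions.

    ```python
    # Sketch: nested Monte Carlo. Outer loop: epistemic draws of the moments
    # of a lognormal transfer factor; inner loop: aleatory draws of the
    # factor itself (all numbers are toy assumptions).
    import numpy as np

    rng = np.random.default_rng(6)
    n_outer, n_inner = 200, 1000

    p95 = np.empty(n_outer)
    for i in range(n_outer):
        mu = rng.normal(0.0, 0.3)        # epistemic: uncertain log-mean
        sigma = rng.uniform(0.4, 0.8)    # epistemic: uncertain log-std
        tf = rng.lognormal(mu, sigma, n_inner)   # aleatory variability
        p95[i] = np.quantile(1.0e-6 * tf, 0.95)  # toy dose conversion factor

    # Spread of the aleatory percentile across outer draws shows the
    # (here larger) epistemic contribution.
    print(f"95th percentile spans {p95.min():.2e} to {p95.max():.2e}")
    ```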

  33. Methods of comparing associative models and an application to retrospective revaluation.

    PubMed

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data.

  34. Assessing the importance of self-regulating mechanisms in diamondback moth population dynamics: application of discrete mathematical models.

    PubMed

    Nedorezov, Lev V; Löhr, Bernhard L; Sadykova, Dinara L

    2008-10-07

    The applicability of discrete mathematical models for the description of diamondback moth (DBM) (Plutella xylostella L.) population dynamics was investigated. The parameter values for several well-known discrete time models (Skellam, Moran-Ricker, Hassell, Maynard Smith-Slatkin, and discrete logistic models) were estimated for an experimental time series from a highland cabbage-growing area in eastern Kenya. For all sets of parameters, boundaries of confidence domains were determined. Maximum calculated birth rates varied between 1.086 and 1.359 when empirical values were used for parameter estimation. After fitting of the models to the empirical trajectory, all birth rate values were considerably higher (1.742-3.526). The carrying capacity was estimated at between 13.0 and 39.9 DBM/plant; after fitting of the models these values declined to 6.48-9.3, all well within the range encountered empirically. The application of the Durbin-Watson criteria for comparison of theoretical and experimental population trajectories produced negative correlations with all models. A test of residual value groupings for randomness showed that their distribution is non-stochastic. In consequence, we conclude that DBM dynamics cannot be explained as a result of intra-population self-regulative mechanisms only (=by any of the models tested) and that more comprehensive models are required for the explanation of DBM population dynamics.
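
    As an example of fitting one of the named models, the sketch below fits the Moran-Ricker map N[t+1] = N[t] exp(r(1 - N[t]/K)) to a short series by least squares on one-step predictions; the DBM counts are invented.

    ```python
    # Sketch: least-squares fit of the Moran-Ricker map on one-step
    # predictions (invented DBM/plant counts).
    import numpy as np
    from scipy.optimize import curve_fit

    obs = np.array([2.1, 4.8, 7.9, 6.2, 8.5, 5.9, 8.8, 6.4, 8.1, 7.0])

    def one_step(n, r, K):
        return n * np.exp(r * (1.0 - n / K))

    (r, K), _ = curve_fit(one_step, obs[:-1], obs[1:], p0=[1.0, 7.0])
    print(f"r = {r:.2f} (birth rate exp(r) = {np.exp(r):.2f}), K = {K:.1f}")
    ```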

  35. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), the Elbow-to-Wrist Uptake Ratio (EWUR) and the Elbow-to-Wrist Relative Uptake Ratio (EWRUR). However, modeling FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time-activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.

  36. The Application of Some Hartree-Fock Model Calculation to the Analysis of Atomic and Free-Ion Optical Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayhurst, Thomas Laine

    1980-08-06

    Techniques for applying ab-initio calculations to the analysis of atomic spectra are investigated, along with the relationship between the semi-empirical and ab-initio forms of Slater-Condon theory. Slater-Condon theory is reviewed with a focus on the essential features that lead to the effective Hamiltonians associated with the semi-empirical form of the theory. Ab-initio spectroscopic parameters are calculated from wavefunctions obtained via self-consistent field methods, while multi-configuration Hamiltonian matrices are constructed and diagonalized with computer codes written by Robert Cowan of Los Alamos Scientific Laboratory. Group theoretical analysis demonstrates that wavefunctions more general than Slater determinants (i.e. wavefunctions with radial correlations between electrons) lead to essentially the same parameterization of effective Hamiltonians. In the spirit of this analysis, a strategy is developed for adjusting ab-initio values of the spectroscopic parameters, reproducing parameters obtained by fitting the corresponding effective Hamiltonian. Secondary parameters are used to "screen" the calculated (primary) spectroscopic parameters, their values determined by least squares. Extrapolations of the secondary parameters determined from analyzed spectra are attempted to correct calculations of atoms and ions without experimental levels. The adjustment strategy and extrapolations are tested on the K I sequence from K0+ through Fe7+, fitting to experimental levels for V4+ and Cr5+; unobserved levels and spectra are predicted for several members of the sequence. A related problem is also discussed: energy levels of the uranium hexahalide complexes, (UX6)2- for X = F, Cl, Br, and I, are fit to an effective Hamiltonian (the f2 configuration in Oh symmetry) with corrections proposed by Brian Judd.

  37. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and confirming the refined model's ability to characterize materials.

  38. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices.

  39. An extended Zel'dovich model for the halo mass function

    NASA Astrophysics Data System (ADS)

    Lim, Seunghwan; Lee, Jounghun

    2013-01-01

    A new way to construct a fitting formula for the halo mass function is presented. Our formula is expressed as a solution to the modified Jedamzik matrix equation that automatically satisfies the normalization constraint. The characteristic parameters expressed in terms of the linear shear eigenvalues are empirically determined by fitting the analytic formula to the numerical results from the high-resolution N-body simulation and found to be independent of scale, redshift and background cosmology. Our fitting formula with the best-fit parameters is shown to work excellently over a wide mass range at various redshifts: the ratio of the analytic formula to the N-body results departs from unity by up to 10% and 5% over 10^11 <= M/(h^-1 Msolar) <= 5 × 10^15 at z = 0, 0.5 and 1 for the FoF-halo and SO-halo cases, respectively.

  40. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
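
    The underlying competition dynamics can be sketched with a standard two-species Lotka-Volterra system; the growth rates and competition coefficients below are placeholders, not the principle-derived estimates from census and survey data.

    ```python
    # Sketch: two-language Lotka-Volterra competition; parameter values are
    # placeholders, not estimates derived from census or survey data.
    import numpy as np
    from scipy.integrate import solve_ivp

    def competition(t, x, r1, r2, K1, K2, a12, a21):
        x1, x2 = x
        dx1 = r1 * x1 * (1 - (x1 + a12 * x2) / K1)   # speakers of language 1
        dx2 = r2 * x2 * (1 - (x2 + a21 * x1) / K2)   # speakers of language 2
        return [dx1, dx2]

    sol = solve_ivp(competition, (0.0, 200.0), [0.4, 0.6],
                    args=(0.10, 0.08, 1.0, 1.0, 1.3, 0.7))
    print(f"final shares: {sol.y[0, -1]:.2f} vs {sol.y[1, -1]:.2f}")
    ```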

  41. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.

  42. Mathematical Storage-Battery Models

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.; Aston, M.

    1985-01-01

    An empirical formula represents the performance of electrical storage batteries. The formula covers many battery types and includes numerous coefficients adjusted to fit the peculiarities of each type. Battery and load parameters taken into account include power density in the battery, discharge time, and electrolyte temperature. Applications include electric-vehicle "fuel" gages and powerline load leveling.

  4. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to describe the distribution of the transformed model errors. The BC-GED model unifies all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted through the BC-GED model; e.g. the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors in the distribution these indicators assume. In order to assess the effect of the BC-GED parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that the indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the β > 1 class of indicators captures high flow very well but mimics baseflow poorly, whereas calibration with the β ≤ 1 class reproduces baseflow very well. This is because, first, the larger the value of β, the greater the emphasis put on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1, but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation drives the model errors as close to zero as possible. The BC-GED approach, which estimates the parameters of the BC-GED model (λ and β) together with the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation against groundwater levels very good, but the model errors also best fulfill the statistical assumptions. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the BC-GED parameters, because the model validation of the MAE is best in such cases.
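
    As an illustration of the BC-GED idea (a minimal sketch, not the authors' code), the Python fragment below builds the negative log-likelihood: errors between Box-Cox-transformed observations and simulations are modeled with a zero-mean GED of scale alpha and power beta, so that beta = 2 recovers the Gaussian (MSE-like) assumption and beta = 1 the Laplace (MAE-like) one. All names, bounds and starting values are illustrative.

      import numpy as np
      from scipy.special import gammaln
      from scipy.optimize import minimize

      def boxcox(y, lam):
          # Box-Cox transform; lam -> 0 reduces to log(y)
          return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

      def ged_neglogpdf(e, alpha, beta):
          # Zero-mean generalized error distribution, scale alpha, power beta.
          # beta = 2 -> Gaussian (MSE); beta = 1 -> Laplace (MAE).
          logc = np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
          return -(logc - (np.abs(e) / alpha) ** beta)

      def bcged_nll(params, obs, sim):
          lam, alpha, beta = params
          e = boxcox(obs, lam) - boxcox(sim, lam)
          # Jacobian of the Box-Cox transform keeps likelihoods comparable
          # across different values of lambda.
          jac = (lam - 1.0) * np.sum(np.log(obs))
          return np.sum(ged_neglogpdf(e, alpha, beta)) - jac

      # obs, sim: positive arrays of observed and simulated flows, e.g.:
      # res = minimize(bcged_nll, x0=[0.3, 1.0, 1.0], args=(obs, sim),
      #                bounds=[(0.0, 1.0), (1e-6, None), (0.1, 4.0)])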

  5. Weighted-density functionals for cavity formation and dispersion energies in continuum solvation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Gunceler, Deniz; Arias, T. A.

    2014-10-07

    Continuum solvation models enable efficient first-principles calculations of chemical reactions in solution, but require extensive parametrization and fitting for each solvent and class of solute systems. Here, we examine the assumptions of continuum solvation models in detail and replace empirical terms with physical models in order to construct a minimally empirical solvation model. Specifically, we derive solvent radii from the nonlocal dielectric response of the solvent from ab initio calculations, construct a closed-form and parameter-free weighted-density approximation for the free energy of cavity formation, and employ a pair-potential approximation for the dispersion energy. We show that the resulting model, with a single solvent-independent parameter, the electron density threshold (n_c), and a single solvent-dependent parameter, the dispersion scale factor (s_6), reproduces solvation energies of organic molecules in water, chloroform, and carbon tetrachloride with RMS errors of 1.1, 0.6 and 0.5 kcal/mol, respectively. We additionally show that fitting the solvent-dependent s_6 parameter to the solvation energy of a single non-polar molecule does not substantially increase these errors. Parametrization of this model for other solvents therefore requires minimal effort and is possible without extensive databases of experimental solvation free energies.

  6. Weighted-density functionals for cavity formation and dispersion energies in continuum solvation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Gunceler, Deniz; Arias, T. A.

    2014-10-07

    Continuum solvation models enable efficient first-principles calculations of chemical reactions in solution, but require extensive parametrization and fitting for each solvent and class of solute systems. Here, we examine the assumptions of continuum solvation models in detail and replace empirical terms with physical models in order to construct a minimally empirical solvation model. Specifically, we derive solvent radii from the nonlocal dielectric response of the solvent from ab initio calculations, construct a closed-form and parameter-free weighted-density approximation for the free energy of cavity formation, and employ a pair-potential approximation for the dispersion energy. We show that the resulting model, with a single solvent-independent parameter, the electron density threshold (n_c), and a single solvent-dependent parameter, the dispersion scale factor (s_6), reproduces solvation energies of organic molecules in water, chloroform, and carbon tetrachloride with RMS errors of 1.1, 0.6 and 0.5 kcal/mol, respectively. We additionally show that fitting the solvent-dependent s_6 parameter to the solvation energy of a single non-polar molecule does not substantially increase these errors. Parametrization of this model for other solvents therefore requires minimal effort and is possible without extensive databases of experimental solvation free energies.

  7. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as a country's strategy for competition. According to the empirical data on trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares over different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product-complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to the empirical data. The empirical data are consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. A country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
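
    The constrained maximum-entropy calculation itself is compact. In the sketch below (an illustration under assumptions made here, not the authors' code), maximizing share entropy subject to a fixed expected product complexity gives Gibbs-form shares s_i proportional to exp(-lambda * c_i), with the single multiplier lambda playing the role of the one tuned parameter; the complexity values and target mean are invented.

      import numpy as np
      from scipy.optimize import brentq

      def maxent_shares(c, mean_c):
          # Shares maximizing entropy subject to sum(s) = 1 and
          # sum(s * c) = mean_c; the solution is s_i ~ exp(-lam * c_i).
          def gap(lam):
              w = np.exp(-lam * c)
              return np.sum(w * c) / np.sum(w) - mean_c
          lam = brentq(gap, -50.0, 50.0)   # solve for the multiplier
          s = np.exp(-lam * c)
          return s / s.sum(), lam

      c = np.array([0.2, 0.5, 1.0, 1.7, 2.4])   # hypothetical product complexities
      shares, lam = maxent_shares(c, mean_c=0.9)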

  8. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as a country's strategy for competition. According to the empirical data on trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares over different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product-complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to the empirical data. The empirical data are consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. A country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052

  9. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can, however, be questioned owing to the difficulty of linking model parameters to the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated with the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to reproduce the ensemble of BTCs at late time, while the LOG model provides results consistent with the PLCO model, although without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally wrong) for the empirical fitting of experimental BTC tails due to the effects of subsampling, this is not true for predictive purposes. A careful selection of the proper heavily tailed models and corresponding parameters is required to ensure physically based transport predictions.
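
    To make the model comparison concrete, here is a hedged sketch of MLE fitting for the PL and PLCO densities on a finite observation window; the normalization constant is computed numerically so both models see the same data. Function and variable names are assumptions of this sketch, not the authors' code.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      def nll(theta, t, tmin, tmax, cutoff):
          # Negative log-likelihood for f(t) ~ t**(-alpha) * exp(-lam * t),
          # normalized numerically on [tmin, tmax]; lam = 0 gives the pure PL.
          alpha, lam = (theta[0], theta[1]) if cutoff else (theta[0], 0.0)
          if alpha <= 0 or lam < 0:
              return np.inf
          logf = -alpha * np.log(t) - lam * t
          Z, _ = quad(lambda x: x**(-alpha) * np.exp(-lam * x), tmin, tmax)
          return -(np.sum(logf) - t.size * np.log(Z))

      # t: late-time BTC sample times inside the fitting window, e.g.:
      # pl   = minimize(nll, [2.0],      args=(t, t.min(), t.max(), False), method="Nelder-Mead")
      # plco = minimize(nll, [1.0, 0.1], args=(t, t.min(), t.max(), True),  method="Nelder-Mead")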

  10. An Empirical Calibration of the Mixing-Length Parameter α

    NASA Astrophysics Data System (ADS)

    Ferraro, Francesco R.; Valenti, Elena; Straniero, Oscar; Origlia, Livia

    2006-05-01

    We present an empirical calibration of the mixing-length free parameter α based on a homogeneous infrared database of 28 Galactic globular clusters spanning a wide metallicity range (-2.15 < [Fe/H] < -0.2). Empirical estimates of the red giant effective temperatures have been obtained from infrared colors. Suitable relations linking these temperatures to the cluster metallicity have been obtained and compared to theoretical predictions. An appropriate set of models for the Sun and Population II giants has been computed by using both the standard solar metallicity (Z/X)_solar = 0.0275 and the most recently proposed value (Z/X)_solar = 0.0177. We find that when the standard solar metallicity is adopted, a unique value of α = 2.17 can be used to reproduce both the solar radius and the Population II red giant temperature. Conversely, when the new solar metallicity is adopted, two different values of α are required: α = 1.86 to fit the solar radius and α ~ 2.0 to fit the red giant temperatures. However, it must be noted that regardless of the adopted solar reference, the α parameter does not show any significant dependence on metallicity. Based on observations collected at the European Southern Observatory (ESO), La Silla, Chile. Also based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundacion Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.

  11. Option price and market instability

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao

    2017-04-01

    An option pricing formula, for which the price of an option depends on both the value of the underlying security and the velocity of the security, was proposed in Baaquie and Yang (2014). The FX (foreign exchange) option price was studied empirically in Baaquie et al. (2014), where it was found that the model in general provides an excellent fit for all strike prices with fixed model parameters, unlike the Black-Scholes option price (Hull and White, 1987), which requires the empirically determined implied volatility surface to fit the option data. The option price proposed in Baaquie and Yang (2014) did not fit the data during the crisis of 2007-2008. We hypothesize that the failure of the option price to fit the data indicates the market's large deviation from its near-equilibrium behavior due to market instability. Furthermore, our indicator of market instability is shown to be more accurate than the option's observed volatility. The market prices of the FX option for various currencies are studied in the light of our hypothesis.

  12. A Comprehensive Physical Impedance Model of Polymer Electrolyte Fuel Cell Cathodes in Oxygen-free Atmosphere.

    PubMed

    Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill

    2018-03-21

    Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, in which all the equivalent-circuit parameters have a clear physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximation of the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each with a clear physical meaning. It was used to investigate the cathode catalyst layer and the double-layer capacitance at the interface between the ionomer/membrane and the Pt electrocatalyst. The model has demonstrated excellent results in fitting and interpreting the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.
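
    For context, a minimal sketch of De Levie's transmission-line formula for a single cylindrical pore is shown below, assuming a purely capacitive pore wall (consistent with the oxygen-free condition); r_ion and c_dl denote ionic resistance and double-layer capacitance per unit pore length, and all numbers are placeholders rather than the paper's values.

      import numpy as np

      def de_levie_pore(omega, L, r_ion, c_dl):
          # Impedance of one cylindrical pore (De Levie transmission line).
          # r_ion: ionic resistance per unit pore length [Ohm/m]
          # c_dl : double-layer capacitance per unit pore length [F/m]
          z_int = 1.0 / (1j * omega * c_dl)        # interfacial impedance per length
          k = np.sqrt(r_ion / z_int)
          return np.sqrt(r_ion * z_int) / np.tanh(k * L)   # coth = 1/tanh

      omega = 2 * np.pi * np.logspace(-1, 4, 200)  # 0.1 Hz to 10 kHz
      Z = de_levie_pore(omega, L=5e-6, r_ion=1e9, c_dl=1e-4)
      # A porous electrode is then approximated as N such pores in parallel: Z / N.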

  13. Seven-parameter statistical model for BRDF in the UV band.

    PubMed

    Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua

    2012-05-21

    A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. Once the seven parameters are fixed, the model describes scattering data in the UV band well.

  14. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell event are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of them. The goodness-of-fit (GOF) between empirical data and theoretical distributions are then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
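
    The L-moments step has a classical closed-form recipe for the GEV (Hosking's approximation); the sketch below is a generic textbook implementation, not the authors' code, with the sample L-moments computed from probability-weighted moments.

      import numpy as np
      from scipy.special import gamma

      def gev_lmom_fit(x):
          # Fit GEV(location xi, scale a, shape k) by L-moments (Hosking 1985).
          x = np.sort(x)
          n = x.size
          j = np.arange(1, n + 1)
          b0 = x.mean()
          b1 = np.sum((j - 1) / (n - 1) * x) / n
          b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
          l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
          t3 = l3 / l2                                   # L-skewness
          c = 2.0 / (3.0 + t3) - np.log(2) / np.log(3)
          k = 7.8590 * c + 2.9554 * c**2                 # shape (Hosking's approx.)
          a = l2 * k / (gamma(1 + k) * (1 - 2.0**(-k)))  # scale
          xi = l1 - a * (1 - gamma(1 + k)) / k           # location
          return xi, a, k

      # x: annual-maximum dry-spell lengths at one station, e.g. xi, a, k = gev_lmom_fit(x)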

  15. Universal empirical fit to L-shell X-ray production cross sections in ionization by protons

    NASA Astrophysics Data System (ADS)

    Lapicki, G.; Miranda, J.

    2018-01-01

    A compilation published in 2014, with a recent 2017 update, contains 5730 experimental total L-shell X-ray production cross sections (XRPCS). The database covers an energy range from 10 keV to 1 GeV, and targets from 18Ar to 95Am. With only two adjustable parameters, a universal fit to these data, normalized to the XRPCS calculated at proton velocity v1 equal to the L-shell electron velocity v2L, is obtained in terms of the single ratio v1/v2L. This fit reproduces 97% of the compiled XRPCS to within a factor of 2.

  16. Evaluation of an activated carbon packed bed for the adsorption of phenols from petroleum refinery wastewater.

    PubMed

    El-Naas, Muftah H; Alhaija, Manal A; Al-Zuhair, Sulaiman

    2017-03-01

    The performance of an adsorption column packed with granular activated carbon was evaluated for the removal of phenols from refinery wastewater. The effects of phenol feed concentration (80-182 mg/l), feed flow rate (5-20 ml/min), and activated carbon packing mass (5-15 g) on the breakthrough characteristics of the adsorption system were determined. The continuous adsorption process was simulated using batch data and the parameters for a new empirical model were determined. Different dynamic models, such as the Adams-Bohart, Wolborska, Thomas, and Yoon-Nelson models, were also fitted to the experimental data for the sake of comparison. The empirical, Yoon-Nelson and Thomas models showed a high degree of fit at different operating conditions, with the empirical model giving the best fit based on the Akaike information criterion (AIC). At an initial phenol concentration of 175 mg/l, a packing mass of 10 g, a flow rate of 10 ml/min and a temperature of 25 °C, the SSE values of the new empirical and Thomas models were identical (248.35) and very close to that of the Yoon-Nelson model (259.49). These values were significantly lower than that of the Adams-Bohart model, which was determined to be 19,358.48. The superiority of the new empirical model and the Thomas model was also confirmed by the values of R^2 and AIC, which were 0.99 and 38.3, respectively, compared to 0.92 and 86.2 for the Adams-Bohart model.
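
    The AIC ranking quoted above is easy to reproduce for least-squares fits, where AIC = n ln(SSE/n) + 2k for n data points and k fitted parameters. The sketch below reuses the SSE values from the abstract; the sample size and per-model parameter counts are assumptions for illustration only.

      import numpy as np

      def aic_from_sse(sse, n, k):
          # AIC for a least-squares fit with n data points and k parameters
          return n * np.log(sse / n) + 2 * k

      # Hypothetical example: n = 60 breakthrough samples, SSE from the abstract,
      # and assumed parameter counts for each dynamic model.
      for name, sse, k in [("empirical", 248.35, 3), ("Thomas", 248.35, 2),
                           ("Yoon-Nelson", 259.49, 2), ("Adams-Bohart", 19358.48, 2)]:
          print(name, round(aic_from_sse(sse, n=60, k=k), 1))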

  17. Apparent cosmic acceleration from Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.

    2017-11-01

    Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.

  18. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.

  19. SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T

    2014-06-01

    Purpose: Scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution from scatter photons in an external photon beam. The purpose of this study is to develop an improved analytical formula to describe SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using the EGSNRC code. Point-spread scatter dose kernels in water were generated. The scatter-to-primary ratio (SPR) was also calculated using MC simulation as a function of field size for circular fields with radius r and depth d. The doses from forward scatter and backscatter photons were calculated using a convolution of the point-spread scatter dose kernel, accounting for scatter photons contributing to dose from interactions before (z' < d) and after (z' > d) the depth of interest, d, respectively, where z' is the depth of the scatter photons. The depth dependence of the ratio of the forward scatter and backscatter doses was determined as a function of depth and field size. Results: We are able to improve the existing 3-parameter (a, w, d0) empirical formula for SPR by introducing depth dependence for one of the parameters, d0, which becomes 0 at deeper depths. The depth dependence of d0 can be directly calculated as the ratio of backscatter-to-forward scatter doses for otherwise the same field and depth. With the improved empirical formula, we can fit SPR for all megavoltage photon beams to within 2%. The existing 3-parameter formula cannot fit the SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed that fits SPR for all megavoltage photon energies to within 2%.

  20. Quantitative genetic versions of Hamilton's rule with empirical applications

    PubMed Central

    McGlothlin, Joel W.; Wolf, Jason B.; Brodie, Edmund D.; Moore, Allen J.

    2014-01-01

    Hamilton's theory of inclusive fitness revolutionized our understanding of the evolution of social interactions. Surprisingly, an incorporation of Hamilton's perspective into the quantitative genetic theory of phenotypic evolution has been slow, despite the popularity of quantitative genetics in evolutionary studies. Here, we discuss several versions of Hamilton's rule for social evolution from a quantitative genetic perspective, emphasizing its utility in empirical applications. Although evolutionary quantitative genetics offers methods to measure each of the critical parameters of Hamilton's rule, empirical work has lagged behind theory. In particular, we lack studies of selection on altruistic traits in the wild. Fitness costs and benefits of altruism can be estimated using a simple extension of phenotypic selection analysis that incorporates the traits of social interactants. We also discuss the importance of considering the genetic influence of the social environment, or indirect genetic effects (IGEs), in the context of Hamilton's rule. Research in social evolution has generated an extensive body of empirical work focusing—with good reason—almost solely on relatedness. We argue that quantifying the roles of social and non-social components of selection and IGEs, in addition to relatedness, is now timely and should provide unique additional insights into social evolution. PMID:24686930

  1. Path integral for equities: Dynamic correlation and empirical analysis

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan

    2012-02-01

    This paper develops a model to describe the unequal time correlation between rate of returns of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model and different de-noising methods are used to capture the signals concealed in the rate of return. The detailed results of this Gaussian model show that the different stocks can have strong correlation and the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.

  2. Investigation of 14-15 MeV ( n, t) Reaction Cross-sections by Using New Evaluated Empirical and Semi-empirical Systematic Formulas

    NASA Astrophysics Data System (ADS)

    Tel, E.; Aydın, A.; Kaplan, A.; Şarer, B.

    2008-09-01

    In a hybrid reactor, tritium self-sufficiency must be maintained for a commercial power plant. For a self-sustaining (D-T) fusion driver, the tritium breeding ratio should be greater than 1.05. Working out the systematics of (n, t) reaction cross-sections is of great importance for defining the character of the excitation function for reactions taking place on various nuclei at energies up to 20 MeV. In this study we have investigated the asymmetry term effect on (n, t) reaction cross-sections at 14-15 MeV incident neutron energy. The odd-even effect and the pairing effect are discussed in view of the binding-energy systematics of the nuclear shell model, using the new experimental data and the new (n, t) cross-section formulas developed by Tel et al. We have determined different parameter groups by classifying the nuclei into even-even, even-odd and odd-even for (n, t) reaction cross-sections. The empirical and semi-empirical formulas obtained by fitting two parameters for (n, t) reactions are given. All calculated results have been compared with the experimental data and the other semi-empirical formulas.

  3. SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas

    2016-01-10

    We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.

  4. Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory

    ERIC Educational Resources Information Center

    Wells, Craig S.; Bolt, Daniel M.

    2008-01-01

    Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…

  5. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    NASA Astrophysics Data System (ADS)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

    Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by this account, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire size distributions across ecoregions indicated that most ecoregions' fire-size distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggest that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
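
    For reference, the one-parameter power-law fit used in such analyses has a closed-form MLE for continuous data above a lower bound x_min, alpha = 1 + n / sum(ln(x_i / x_min)). Below is a generic sketch in the style of Clauset et al., with x_min chosen by minimum Kolmogorov-Smirnov distance; it is an illustration, not the authors' code.

      import numpy as np

      def plfit(x):
          # Continuous power-law fit with xmin chosen by minimum KS distance
          x = np.sort(x)
          best = (np.inf, None, None)
          for xmin in np.unique(x[:-10]):                # keep at least 10 tail points
              tail = x[x >= xmin]
              alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
              cdf_emp = np.arange(1, tail.size + 1) / tail.size
              cdf_fit = 1.0 - (tail / xmin) ** (1.0 - alpha)
              ks = np.abs(cdf_emp - cdf_fit).max()
              if ks < best[0]:
                  best = (ks, alpha, xmin)
          return best                                    # (ks, alpha, xmin)

      # x: fire sizes (ha) in one ecoregion, e.g. ks, alpha, xmin = plfit(x)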

  6. An empirical model for dissolution profile and its application to floating dosage forms.

    PubMed

    Weiss, Michael; Kriangkrai, Worawut; Sungthongjeen, Srisagul

    2014-06-02

    A sum of two inverse Gaussian functions is proposed as a highly flexible empirical model for fitting of in vitro dissolution profiles. The model was applied to quantitatively describe theophylline release from effervescent multi-layer coated floating tablets containing different amounts of the anti-tacking agents talc or glyceryl monostearate. Model parameters were estimated by nonlinear regression (mixed-effects modeling). The estimated parameters were used to determine the mean dissolution time, as well as to reconstruct the time course of release rate for each formulation, whereby the fractional release rate can serve as a diagnostic tool for classification of dissolution processes. The approach allows quantification of dissolution behavior and could provide additional insights into the underlying processes. Copyright © 2014 Elsevier B.V. All rights reserved.
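
    One plausible reading of the two-component model is a weighted mixture of two inverse-Gaussian residence-time distributions for the cumulative fraction released; the sketch below fits such a mixture with scipy, with the parameterization, bounds and starting values all being assumptions of this illustration rather than the authors' formulation.

      import numpy as np
      from scipy.stats import invgauss
      from scipy.optimize import curve_fit

      def release(t, w, mu1, lam1, mu2, lam2):
          # Cumulative fraction released: mixture of two inverse-Gaussian CDFs.
          # scipy's invgauss(mu=m/l, scale=l) is the classical IG(mean=m, shape=l).
          f1 = invgauss.cdf(t, mu1 / lam1, scale=lam1)
          f2 = invgauss.cdf(t, mu2 / lam2, scale=lam2)
          return w * f1 + (1.0 - w) * f2

      # t_obs (min), y_obs (fraction 0..1): measured dissolution profile, e.g.:
      # p0 = [0.5, 30, 60, 120, 240]   # illustrative starting values
      # p, _ = curve_fit(release, t_obs, y_obs, p0=p0,
      #                  bounds=([0, 1, 1, 1, 1], [1, 1e3, 1e4, 1e3, 1e4]))
      # Mean dissolution time of the mixture: w*mu1 + (1-w)*mu2.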

  7. Number of independent parameters in the potentiometric titration of humic substances.

    PubMed

    Lenoir, Thomas; Manceau, Alain

    2010-03-16

    With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 +/- 0.21, pK(H,Ph-OH)(FA) = 9.29 +/- 0.33, pK(H,COOH)(HA) = 4.49 +/- 0.18, and pK(H,Ph-OH)(HA) = 9.29 +/- 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
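
    The PCA step is straightforward to reproduce in outline: stack the titration curves, center them, and count the components needed to reach a target fraction of the total variance. A generic sketch follows (the 47-curve data set itself is not reproduced here; names are illustrative).

      import numpy as np

      def n_components(curves, frac=0.999):
          # curves: (n_samples, n_pH_points) array of titration curves.
          # Returns the number of principal components needed to explain
          # the target fraction of total variance.
          X = curves - curves.mean(axis=0)
          s = np.linalg.svd(X, compute_uv=False)
          var = s**2 / np.sum(s**2)
          return int(np.searchsorted(np.cumsum(var), frac) + 1)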

  8. Stochastic modelling of non-stationary financial assets

    NASA Astrophysics Data System (ADS)

    Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.

    2017-11-01

    We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
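
    The step from a stationary, Markov-like parameter series to a Langevin equation dX = D1(X) dt + sqrt(2 D2(X)) dW can be sketched with conditional Kramers-Moyal moments estimated by binning; the recipe below is generic, and the names are invented here.

      import numpy as np

      def kramers_moyal(x, dt, nbins=30):
          # Estimate drift D1(x) and diffusion D2(x) from a sampled series
          dx = np.diff(x)
          xc = x[:-1]
          edges = np.linspace(xc.min(), xc.max(), nbins + 1)
          idx = np.digitize(xc, edges) - 1
          centers, d1, d2 = [], [], []
          for b in range(nbins):
              m = idx == b
              if m.sum() < 20:          # skip poorly populated bins
                  continue
              centers.append(0.5 * (edges[b] + edges[b + 1]))
              d1.append(dx[m].mean() / dt)
              d2.append((dx[m]**2).mean() / (2.0 * dt))
          return np.array(centers), np.array(d1), np.array(d2)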

  9. Statistical parameters of thermally driven turbulent anabatic flow

    NASA Astrophysics Data System (ADS)

    Hilel, Roni; Liberzon, Dan

    2016-11-01

    Field measurements of thermally driven turbulent anabatic flow over a moderate slope are reported. A collocated hot-films-sonic anemometer (Combo) obtained the finer scales of the flow by implementing a Neural Networks based in-situ calibration technique. Eight days of continuous measurements of the wind and temperature fluctuations reviled a diurnal pattern of unstable stratification that forced development of highly turbulent unidirectional up slope flow. Empirical fits of important turbulence statistics were obtained from velocity fluctuations' time series alongside fully resolved spectra of velocity field components and characteristic length scales. TKE and TI showed linear dependence on Re, while velocity derivative skewness and dissipation rates indicated the anisotropic nature of the flow. Empirical fits of normalized velocity fluctuations power density spectra were derived as spectral shapes exhibited high level of similarity. Bursting phenomenon was detected at 15% of the total time. Frequency of occurrence, spectral characteristics and possible generation mechanism are discussed. BSF Grant #2014075.

  10. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    NASA Astrophysics Data System (ADS)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase in the volume of geomagnetic field data available from many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, eventually required a qualitative transition in empirical modeling tools. Classical empirical models, such as T96 and T02, used a few custom-tailored modules to represent the major magnetospheric current systems, and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions, as well as by advanced data-mining algorithms that search for events whose global activity parameters, such as the Sym-H index, are similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with an individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires a corresponding transition in how the new empirical geomagnetic field models are used. Their operation becomes more similar to the runs-on-request offered by the Community Coordinated Modeling Center for many first-principles MHD and kinetic codes. To provide this mode of operation for the TS07D model, a new web-based modeling tool has been created and tested at JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, in which the magnetospheric macro-parameters involved in the data-binning process (the Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.

  11. A FORTRAN Computer Program to Perform Goodness of Fit Testing on Empirical Data.

    DTIC Science & Technology

    1979-06-01

    If the kurtosis K equals 3, the distribution is mesokurtic; if K is less than 3, the distribution is platykurtic; and if K is greater than 3, the distribution is described as leptokurtic. Figures 9, 10, and 11 illustrate mesokurtic, platykurtic, and leptokurtic shapes (8). The population parameters for

  12. Atomic scale simulations for improved CRUD and fuel performance modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Anders David Ragnar; Cooper, Michael William Donald

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  13. Method development estimating ambient mercury concentration from monitored mercury wet deposition

    NASA Astrophysics Data System (ADS)

    Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.

    2013-05-01

    Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using a Beta distribution fit of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution were generated using data collected in 2009 at 11 monitoring stations. Comparison of the normalized histogram with the fitted density function shows that the fitted Beta distribution closely matches the empirical distribution of the ratio. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using the linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurements are not available but where wet deposition is monitored.
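
    A hedged sketch of the distribution-fitting step: the concentration-to-deposition ratios are rescaled into (0, 1), a Beta density is fitted by maximum likelihood, and the fitted mean ratio converts monitored wet deposition into an ambient-concentration estimate. The rescaling convention below is an assumption made for illustration, not necessarily the paper's choice.

      import numpy as np
      from scipy.stats import beta

      def fit_ratio(ratios):
          # ratios: concentration / wet deposition where both are measured
          r_max = 1.05 * ratios.max()            # map ratios into (0, 1)
          a, b, loc, scale = beta.fit(ratios / r_max, floc=0, fscale=1)
          mean_ratio = r_max * a / (a + b)       # back-transformed Beta mean
          return a, b, mean_ratio

      # estimated_concentration = mean_ratio * monitored_wet_deposition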

  14. Comparison of Ultra-Rapid Orbit Prediction Strategies for GPS, GLONASS, Galileo and BeiDou.

    PubMed

    Geng, Tao; Zhang, Peng; Wang, Wei; Xie, Xin

    2018-02-06

    Currently, ultra-rapid orbits play an important role in the high-speed development of global navigation satellite system (GNSS) real-time applications. This contribution focuses on the impact of the fitting arc length of observed orbits and solar radiation pressure (SRP) on the orbit prediction performance for GPS, GLONASS, Galileo and BeiDou. One full year's precise ephemerides during 2015 were used as fitted observed orbits and then as references to be compared with predicted orbits, together with known earth rotation parameters. The full nine-parameter Empirical Center for Orbit Determination in Europe (CODE) Orbit Model (ECOM) and its reduced version were chosen in our study. The arc lengths of observed fitted orbits that showed the smallest weighted root mean squares (WRMSs) and medians of the orbit differences after a Helmert transformation fell between 40 and 45 h for GPS and GLONASS and between 42 and 48 h for Galileo, while the WRMS values and medians become flat after a 42 h arc length for BeiDou. The stability of the Helmert transformation and SRP parameters also confirmed the similar optimal arc lengths. The range around 42-45 h is suggested to be the optimal arc length interval of the fitted observed orbits for the multi-GNSS joint solution of ultra-rapid orbits.
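
    The orbit comparison above rests on a seven-parameter Helmert (similarity) transformation before computing WRMS; below is a small-angle, linearized least-squares sketch under one common sign convention, with all names invented here for illustration.

      import numpy as np

      def helmert7(A, B):
          # Least-squares 7-parameter transformation (3 shifts, 3 small
          # rotations, 1 scale) mapping point set A (n,3) onto B (n,3).
          n = A.shape[0]
          M = np.zeros((3 * n, 7))
          for i, (x, y, z) in enumerate(A):
              # parameter order: tx, ty, tz, rx, ry, rz, s
              M[3*i + 0] = [1, 0, 0,  0, -z,  y, x]
              M[3*i + 1] = [0, 1, 0,  z,  0, -x, y]
              M[3*i + 2] = [0, 0, 1, -y,  x,  0, z]
          p, *_ = np.linalg.lstsq(M, (B - A).ravel(), rcond=None)
          resid = (B - A).ravel() - M @ p
          return p, np.sqrt(np.mean(resid**2))   # parameters, RMS after transform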

  15. Comparison of Ultra-Rapid Orbit Prediction Strategies for GPS, GLONASS, Galileo and BeiDou

    PubMed Central

    Zhang, Peng; Wang, Wei; Xie, Xin

    2018-01-01

    Currently, ultra-rapid orbits play an important role in the high-speed development of global navigation satellite system (GNSS) real-time applications. This contribution focuses on the impact of the fitting arc length of observed orbits and solar radiation pressure (SRP) on the orbit prediction performance for GPS, GLONASS, Galileo and BeiDou. One full year’s precise ephemerides during 2015 were used as fitted observed orbits and then as references to be compared with predicted orbits, together with known earth rotation parameters. The full nine-parameter Empirical Center for Orbit Determination in Europe (CODE) Orbit Model (ECOM) and its reduced version were chosen in our study. The arc lengths of observed fitted orbits that showed the smallest weighted root mean squares (WRMSs) and medians of the orbit differences after a Helmert transformation fell between 40 and 45 h for GPS and GLONASS and between 42 and 48 h for Galileo, while the WRMS values and medians become flat after a 42 h arc length for BeiDou. The stability of the Helmert transformation and SRP parameters also confirmed the similar optimal arc lengths. The range around 42–45 h is suggested to be the optimal arc length interval of the fitted observed orbits for the multi-GNSS joint solution of ultra-rapid orbits. PMID:29415467

  16. SU-F-T-158: Experimental Characterization of Field Size Dependence of Dose and Lateral Beam Profiles of Scanning Proton and Carbon Ion Beams for Empirical Model in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Hsi, W; Zhao, J

    2016-06-15

    Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of the lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double-Gaussian model for protons and the single-Gaussian model for carbon with the error functions agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose in air was at most 0.74% for protons with the double Gaussian and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double-Gaussian model of lateral beam profiles is significantly better than the single-Gaussian model for protons, while a single-Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but it can provide reference values for clinical use and quality assurance.

  17. Empirical fitness landscapes and the predictability of evolution.

    PubMed

    de Visser, J Arjan G M; Krug, Joachim

    2014-07-01

    The genotype-fitness map (that is, the fitness landscape) is a key determinant of evolution, yet it has mostly been used as a superficial metaphor because we know little about its structure. This is now changing, as real fitness landscapes are being analysed by constructing genotypes with all possible combinations of small sets of mutations observed in phylogenies or in evolution experiments. In turn, these first glimpses of empirical fitness landscapes inspire theoretical analyses of the predictability of evolution. Here, we review these recent empirical and theoretical developments, identify methodological issues and organizing principles, and discuss possibilities to develop more realistic fitness landscape models.

  18. A new UK fission yield evaluation UKFY3.7

    NASA Astrophysics Data System (ADS)

    Mills, Robert William

    2017-09-01

    The JEFF neutron induced and spontaneous fission product yield evaluation is currently unchanged from JEFF-3.1.1, also known by its UK designation UKFY3.6A. It is based upon experimental data combined with empirically fitted mass, charge and isomeric state models which are then adjusted within the experimental and model uncertainties to conform to the physical constraints of the fission process. A new evaluation has been prepared for JEFF, called UKFY3.7, that incorporates new experimental data and replaces the current empirical models (multi-Gaussian fits of mass distribution and Wahl Zp model for charge distribution combined with parameter extrapolation), with predictions from GEF. The GEF model has the advantage that one set of parameters allows the prediction of many different fissioning nuclides at different excitation energies unlike previous models where each fissioning nuclide at a specific excitation energy had to be fitted individually to the relevant experimental data. The new UKFY3.7 evaluation, submitted for testing as part of JEFF-3.3, is described alongside initial results of testing. In addition, initial ideas for future developments allowing inclusion of new measurements types and changing from any neutron spectrum type to true neutron energy dependence are discussed. Also, a method is proposed to propagate uncertainties of fission product yields based upon the experimental data that underlies the fission yield evaluation. The covariance terms being determined from the evaluated cumulative and independent yields combined with the experimental uncertainties on the cumulative yield measurements.

  19. Semi-empirical calculations of line-shape parameters and their temperature dependences for the ν6 band of CH3D perturbed by N2

    NASA Astrophysics Data System (ADS)

    Dudaryonok, A. S.; Lavrentieva, N. N.; Buldyreva, J.

    2018-06-01

    (J, K)-line broadening and shift coefficients, with their temperature-dependence characteristics, are computed for the perpendicular (ΔK = ±1) ν6 band of the 12CH3D-N2 system. The computations are based on a semi-empirical approach which consists in the use of analytical Anderson-type expressions multiplied by a few-parameter correction factor to account for various deviations from the approximations of Anderson's theory. A mathematically convenient form of the correction factor is chosen on the basis of experimental rotational dependences of line widths, and its parameters are fitted to some experimental line widths at 296 K. To obtain the unknown CH3D polarizability in the excited vibrational state v6 for line-shift calculations, a parametric vibration-state-dependent expression is suggested, with two parameters adjusted to some room-temperature experimental values of line shifts. Having been validated by comparison with experimental values available in the literature for various sub-branches of the band, this approach is used to generate a massive set of line-shape parameters for the extended ranges of rotational quantum numbers (J up to 70 and K up to 20) typically requested for spectroscopic databases. To obtain the temperature-dependence characteristics of line widths and line shifts, computations are done for various temperatures in the range 200-400 K recommended for HITRAN, and least-squares fit procedures are applied. For the line widths, a strong sub-branch dependence with increasing K is observed in the R- and P-branches; for the line shifts, such dependence is found in the Q-branch.

  20. Averaging cross section data so we can fit it

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach-theory-based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).
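
    Smoothing a pointwise cross section with a Lorentzian profile amounts to a weighted average; a minimal sketch on a finite energy grid follows, with the grid and width chosen purely for illustration.

      import numpy as np

      def lorentzian_smooth(E, sigma, gamma):
          # Average cross section sigma(E) with a Lorentzian of HWHM gamma,
          # renormalizing the kernel on the finite grid.
          out = np.empty_like(sigma)
          for i, e0 in enumerate(E):
              w = gamma / ((E - e0)**2 + gamma**2)   # unnormalized Lorentzian
              out[i] = np.trapz(w * sigma, E) / np.trapz(w, E)
          return out

      # e.g. E = np.linspace(0.5, 5.0, 2000) in MeV, gamma ~ 0.05 MeV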

  1. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    PubMed

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as the experimental parameters affecting energy consumption. The experimental results showed that the boron removal efficiency reached up to 99% under optimum conditions, namely a solution pH of 8.0, a current density of 6.0 mA/cm(2), an initial boron concentration of 100 mg/L and a solution temperature of 293 K. The current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because higher temperatures lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thus decreased energy consumption. As a result, the energy consumption for boron removal via the electrocoagulation method can be minimized at optimum conditions. An empirical model was derived statistically, and the experimentally obtained values fitted well with the values predicted from the empirical model, as follows: [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  2. A new world survey expression for cosmic ray vertical intensity vs. depth in standard rock

    NASA Technical Reports Server (NTRS)

    Crouch, M.

    1985-01-01

    The cosmic ray data on vertical intensity versus depth below 10^5 g/cm^2 of standard rock are fitted to a five-parameter empirical formula to give an analytical expression for the interpretation of muon fluxes in underground measurements. This expression updates earlier published results and complements the more precise curves obtained by numerical integration or Monte Carlo techniques, in which the fit is made to an energy spectrum at the top of the atmosphere. The expression is valid in the transitional region where neutrino-induced muons begin to be important, as well as at great depths where this component becomes dominant.
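
    Crouch's actual five-parameter formula is given in the paper; purely to illustrate this kind of fit, the sketch below uses a generic five-parameter form (two exponentials in depth plus a depth-independent floor for the neutrino-induced component) on synthetic data.

        import numpy as np
        from scipy.optimize import curve_fit

        def vertical_intensity(h, A1, l1, A2, l2, C):
            # two exponentials in depth plus a constant neutrino-induced floor
            return A1 * np.exp(-h / l1) + A2 * np.exp(-h / l2) + C

        rng = np.random.default_rng(0)
        true = (1.8e-6, 0.45, 1.0e-7, 0.87, 2.0e-13)   # illustrative values
        h = np.linspace(1.0, 14.0, 30)                 # depth, km w.e.
        I = vertical_intensity(h, *true) * rng.lognormal(0.0, 0.05, h.size)

        # fit in log space so the faint deep points still constrain the fit
        popt, _ = curve_fit(lambda h, *p: np.log(vertical_intensity(h, *p)),
                            h, np.log(I), p0=[1e-6, 0.5, 1e-7, 0.9, 1e-13],
                            bounds=(0.0, np.inf))
        print(popt)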

  3. Dielectric response of molecules in empirical tight-binding theory

    NASA Astrophysics Data System (ADS)

    Boykin, Timothy B.; Vogl, P.

    2002-01-01

    In this paper we generalize our previous approach to electromagnetic interactions within empirical tight-binding theory to encompass molecular solids and isolated molecules. In order to guarantee physically meaningful results, we rederive the expressions for relevant observables using commutation relations appropriate to the finite tight-binding Hilbert space. In carrying out this generalization, we examine in detail the consequences of various prescriptions for the position and momentum operators in tight binding. We show that attempting to fit parameters of the momentum matrix directly generally results in a momentum operator which is incompatible with the underlying tight-binding model, while adding extra position parameters results in numerous difficulties, including the loss of gauge invariance. We have applied our scheme, which we term the Peierls-coupling tight-binding method, to the optical dielectric function of the molecular solid PPP, showing that this approach successfully predicts its known optical properties even in the limit of isolated molecules.

  4. The interactions between vegetation and climate seasonality, topography on different time scales under the Budyko framework: case study in China's Loess Plateau

    NASA Astrophysics Data System (ADS)

    Liu, W.; Ning, T.; Shen, H.; Li, Z.

    2017-12-01

    Vegetation, climate seasonality and topography are the main factors controlling the water and heat balance over a catchment, and they are usually empirically formulated into the controlling parameter of the Budyko model. However, their interactions on different time scales have not been fully addressed. Taking 30 catchments in China's Loess Plateau as an example, on the annual scale vegetation coverage was found to be poorly correlated with the climate seasonality index; therefore, both could be parameterized into the Budyko model. On the long-term scale, vegetation coverage tended to have close relationships with topographic conditions and climate seasonality, which was confirmed by the multi-collinearity problems; in that sense, vegetation information alone could fit the controlling parameter. By identifying the dominant controlling factors over different time scales, this study simplified the empirical parameterization of the Budyko formula. The above relationships, though, require further investigation over other regions/catchments.

  5. On fitting the Pareto Levy distribution to stock market index data: Selecting a suitable cutoff value

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.

    2005-08-01

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
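
    A minimal sketch of an objective cutoff selection of this kind (in the spirit of the Kolmogorov-Smirnov approach; the paper's exact discrepancy measure may differ): for each candidate cutoff, fit the exponent by maximum likelihood and keep the cutoff minimizing the KS distance between the empirical tail and the fitted power law.

        import numpy as np

        def fit_alpha(tail, xmin):
            # continuous power-law MLE for the density exponent
            return 1.0 + tail.size / np.sum(np.log(tail / xmin))

        def ks_distance(tail, xmin, alpha):
            tail = np.sort(tail)
            emp = np.arange(1, tail.size + 1) / tail.size
            model = 1.0 - (tail / xmin) ** (1.0 - alpha)
            return np.max(np.abs(emp - model))

        def select_cutoff(x, candidates, min_tail=50):
            best = None
            for xmin in candidates:
                tail = x[x >= xmin]
                if tail.size < min_tail:
                    continue
                alpha = fit_alpha(tail, xmin)
                d = ks_distance(tail, xmin, alpha)
                if best is None or d < best[2]:
                    best = (xmin, alpha, d)
            return best

        rng = np.random.default_rng(0)
        x = 1.0 + rng.pareto(2.0, 10000)   # toy stand-in for extreme index variations
        print(select_cutoff(x, np.quantile(x, np.linspace(0.5, 0.99, 50))))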

  6. An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.

    2013-01-01

    Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…

  7. Empirical predictions of hypervelocity impact damage to the space station

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Hayashida, K. B.

    1991-01-01

    A family of user-friendly, DOS PC based, Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the data base of experimental results that are used by the damage prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach that is based on weighting the experimental data according to the distance from the design point.

  8. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  9. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution*

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  10. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  11. The parameters of death: a consideration of the quantity of information in a life table using a polynomial representation of the survivorship curve.

    PubMed

    Anson, J

    1988-08-01

    How much unique information is contained in any life table? The logarithmic survivorship (lx) columns of 360 empirical life tables were fitted by a weighted fifth degree polynomial, and it is shown that six parameters are adequate to reproduce these curves almost flawlessly. However, these parameters are highly intercorrelated, so that a two-dimensional representation would be adequate to express the similarities and differences among life tables. It is thus concluded that a life table contains but two unique pieces of information, these being the level of mortality in the population which it represents, and the relative shape of the underlying mortality curve.
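
    A minimal sketch of a weighted fifth-degree polynomial fit to a log-survivorship column; the toy lx curve and the weights are assumptions, not one of the 360 empirical tables.

        import numpy as np

        ages = np.arange(0, 101, 5)          # ages 0..100
        lx = np.exp(-8e-5 * ages ** 2.2)     # toy survivorship curve, l0 = 1
        weights = np.sqrt(lx)                # illustrative weighting scheme

        coeffs = np.polyfit(ages, np.log(lx), deg=5, w=weights)
        lx_fit = np.exp(np.polyval(coeffs, ages))
        print(f"max abs error: {np.max(np.abs(lx_fit - lx)):.2e}")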

  12. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear-dynamics Langevin model for predicting the nonstationary wind speed profile during storm events that typically accompany extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transitions between increasing and decreasing wind speed, and between the (quasi-)stationary normal-wind and storm states, are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent-to-coherent transition of light emission, which can be understood through a phase-transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004]. Evidence for the nonlinear-dynamics two-state approach is generated by fitting two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013), taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e. constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation, obtained with a sinusoidal rate ansatz k(t) of period T (= storm duration), exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of the speed fluctuations are derived from the empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events, suggested for applications in anticipative air traffic management.
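
    A minimal Euler-Maruyama sketch of the model class described (second-degree Bernoulli drift, additive Gaussian white noise, sinusoidal rate of period T); all coefficients are illustrative assumptions.

        import numpy as np

        def simulate_storm(T=12.0, dt=0.01, k0=1.0, g=0.05, noise=0.5, seed=1):
            """Integrate dv = (k(t)*v - g*v**2) dt + noise*dW with a sinusoidal
            rate k(t) whose sign change drives the wind/storm transitions."""
            rng = np.random.default_rng(seed)
            n = int(T / dt)
            t = np.linspace(0.0, T, n)
            k = k0 * np.sin(2.0 * np.pi * t / T)   # period = storm duration
            v = np.empty(n)
            v[0] = 5.0                             # pre-storm wind speed, m/s
            for i in range(n - 1):
                drift = k[i] * v[i] - g * v[i] ** 2
                dW = np.sqrt(dt) * rng.standard_normal()
                v[i + 1] = max(v[i] + drift * dt + noise * dW, 0.0)
            return t, v

        t, v = simulate_storm()
        print(f"peak wind speed: {v.max():.1f} m/s")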

  13. A new simple local muscle recovery model and its theoretical and experimental validation.

    PubMed

    Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu

    2015-01-01

    This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted using the recovery model, and individual recovery rates were calculated after fitting. Good fitting values (r² > 0.8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < 0.05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after a fatiguing operation. The determined recovery rate may be useful for representing individual recovery attributes.

  14. Bringing Science to Bear: An Empirical Assessment of the Comprehensive Soldier Fitness Program

    ERIC Educational Resources Information Center

    Lester, Paul B.; McBride, Sharon; Bliese, Paul D.; Adler, Amy B.

    2011-01-01

    This article outlines the U.S. Army's effort to empirically validate and assess the Comprehensive Soldier Fitness (CSF) program. The empirical assessment includes four major components. First, the CSF scientific staff is currently conducting a longitudinal study to determine if the Master Resilience Training program and the Comprehensive…

  15. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in nontrivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.

  16. The measurement and prediction of proton upset

    NASA Astrophysics Data System (ADS)

    Shimano, Y.; Goka, T.; Kuboyama, S.; Kawachi, K.; Kanai, T.

    1989-12-01

    The authors evaluate tolerance to proton upset for three kinds of memories and one microprocessor unit intended for space use by irradiating them with high-energy protons up to nearly 70 MeV. They predict the error rates of these memories using a modified semi-empirical equation of Bendel and Petersen (1983). A two-parameter method was used instead of Bendel's one-parameter method. There is a large difference between these two methods with regard to the fitted parameters. The calculations of upset rates in orbit were carried out using these parameters and the NASA AP8MAC and AP8MIC models. For the 93419 RAM, the result of this calculation was compared with the in-orbit data taken on the MOS-1 spacecraft. Good agreement was found between the two sets of upset-rate data.

  17. Empirical complexities in the genetic foundations of lethal mutagenesis.

    PubMed

    Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J

    2013-10-01

    From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias (mutagenesis of virions), but find that it is hampered by other problems. Finally, we derive results that reveal difficulties in the very interpretation of mutations assayed from double-stranded genomes. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than in a mismatch between the empirical setup and the model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.

  18. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
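
    The conditioning step is the heart of the functionality described; a minimal sketch of the underlying Gaussian-mixture conditioning math follows (the XDGMM package wraps this; the helper and the toy mixture here are hypothetical).

        import numpy as np
        from scipy.stats import multivariate_normal

        def condition_gmm(weights, means, covs, known_idx, known_vals):
            """Condition a Gaussian mixture on known values of some dimensions;
            returns weights, means and covariances of the conditional mixture."""
            free_idx = [i for i in range(means.shape[1]) if i not in known_idx]
            w_new, mu_new, cov_new = [], [], []
            for w, mu, S in zip(weights, means, covs):
                Saa = S[np.ix_(free_idx, free_idx)]
                Sab = S[np.ix_(free_idx, known_idx)]
                Sbb = S[np.ix_(known_idx, known_idx)]
                gain = Sab @ np.linalg.inv(Sbb)
                mu_new.append(mu[free_idx] + gain @ (known_vals - mu[known_idx]))
                cov_new.append(Saa - gain @ Sab.T)
                w_new.append(w * multivariate_normal.pdf(known_vals, mu[known_idx], Sbb))
            w_new = np.array(w_new)
            return w_new / w_new.sum(), np.array(mu_new), np.array(cov_new)

        # toy two-component mixture over (host property, SN parameter)
        w = np.array([0.6, 0.4])
        mu = np.array([[10.0, 1.0], [11.0, 0.8]])
        cov = np.array([np.diag([0.25, 0.01]), [[0.25, 0.03], [0.03, 0.02]]])
        print(condition_gmm(w, mu, cov, known_idx=[0], known_vals=np.array([10.5])))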

  19. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  20. A scaling law for the critical current of Nb3Sn strands based on strong-coupling theory of superconductivity

    NASA Astrophysics Data System (ADS)

    Oh, Sangjun; Kim, Keeman

    2006-02-01

    We study the transition temperature Tc, the thermodynamic critical field Bc, and the upper critical field Bc2 of Nb3Sn within the Eliashberg theory of strongly coupled superconductors, using the Einstein spectrum α²(ω)F(ω) = λ⟨ω²⟩^(1/2) δ(ω − ⟨ω²⟩^(1/2)). The strain dependences of λ(ε) and ⟨ω²⟩^(1/2)(V) are introduced from the empirical strain dependence of Tc(V) for three model cases. It is found that the empirical relation Tc(V)/Tc(0) = [Bc2(4.2 K, V)/Bc2(4.2 K, 0)]^(1/w) (w ≈ 3) is mainly due to low-energy phonon-mode softening. We derive analytic expressions for the strain and temperature dependences of Bc(T, V) and Bc2(T, V) and the Ginzburg-Landau parameter κ(T, V) from the numerical calculation results. The Summers refinement of the temperature dependence of κ(T) shows deviation from our calculation results. We propose a unified scaling law of flux pinning in Nb3Sn strands in the form of the Kramer model with the analytic expressions for Bc2(T, V) and κ(T, V) derived in this work. It is shown that the proposed scaling law gives a reasonable fit to the reported data with only eight fitting parameters.

  1. Tap density equations of granular powders based on the rate process theory and the free volume concept.

    PubMed

    Hao, Tian

    2015-02-28

    The tap density of a granular powder is often linked to the flowability via the Carr index, which measures how tightly a powder can be packed, under the assumption that more easily packed powders usually flow poorly. Understanding how particles are packed is important for revealing why one powder flows better than another. Two types of empirical equations have been proposed in the literature to fit experimental data of packing fraction vs. number of taps: the inverse logarithmic and the stretched exponential. Using the rate process theory and the free volume concept, under the assumption that particles obey similar thermodynamic laws during the tapping process if the "granular temperature" is defined in a different way, we obtain tap density equations that are reducible to the two empirical equations currently widely used in the literature. Our equations could potentially fit experimental data better with an additional adjustable parameter. The tapping amplitude and frequency, the weight of the granular materials, and the environmental temperature are grouped into this parameter, which governs the pace of the packing process. The current results, in conjunction with our previous findings, may imply that both "dry" (granular) and "wet" (colloidal and polymeric) particle systems are governed by the same physical mechanisms in terms of the role of the free volume and how particles behave (a rate-controlled process).
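
    A minimal sketch of fitting the two empirical forms mentioned above to a synthetic tap-density curve; the functional forms below are commonly quoted variants (the paper's exact expressions may differ) and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def inverse_log(n, phi_f, dphi, B):
            # inverse-logarithmic packing law (illustrative form)
            return phi_f - dphi / (1.0 + B * np.log1p(n))

        def stretched_exp(n, phi_f, dphi, tau, beta):
            # KWW-style stretched-exponential packing law
            return phi_f - dphi * np.exp(-((n / tau) ** beta))

        rng = np.random.default_rng(5)
        n = np.arange(1.0, 2001.0)
        phi = stretched_exp(n, 0.64, 0.09, 120.0, 0.8) + rng.normal(0.0, 0.001, n.size)

        p1, _ = curve_fit(inverse_log, n, phi, p0=[0.64, 0.1, 0.1],
                          bounds=(0.0, [1.0, 1.0, 10.0]))
        p2, _ = curve_fit(stretched_exp, n, phi, p0=[0.6, 0.1, 100.0, 1.0],
                          bounds=([0.0, 0.0, 1.0, 0.1], [1.0, 1.0, 1e5, 3.0]))
        for name, model, p in (("inverse-log", inverse_log, p1),
                               ("stretched-exp", stretched_exp, p2)):
            rss = np.sum((phi - model(n, *p)) ** 2)
            print(f"{name}: RSS = {rss:.2e}")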

  2. Estimating procedure times for surgeries by determining location parameters for the lognormal model.

    PubMed

    Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H

    2004-05-01

    We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
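
    The paper selects a best order statistic for the location parameter; for contrast, a minimal sketch of the simpler route (three-parameter shifted-lognormal fit by maximum likelihood) on synthetic procedure times, with a KS check of the fit.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # synthetic procedure times: a fixed 15-minute offset plus lognormal variation
        times = 15.0 + rng.lognormal(mean=3.5, sigma=0.4, size=200)

        shape, loc, scale = stats.lognorm.fit(times)       # MLE with free location
        d, p = stats.kstest(times, "lognorm", args=(shape, loc, scale))
        print(f"location = {loc:.1f} min, KS D = {d:.3f}, p = {p:.2f}")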

  3. Skyshine line-beam response functions for 20- to 100-MeV photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.

  4. Empirical Bolometric Fluxes and Angular Diameters of 1.6 Million Tycho-2 Stars and Radii of 350,000 Stars with Gaia DR1 Parallaxes

    NASA Astrophysics Data System (ADS)

    Stevens, Daniel J.; Stassun, Keivan G.; Gaudi, B. Scott

    2017-12-01

    We present bolometric fluxes and angular diameters for over 1.6 million stars in the Tycho-2 catalog, determined using previously derived empirical color-temperature and color-flux relations. We vet these relations via full fits to the broadband spectral energy distributions for a subset of benchmark stars and perform quality checks against the large set of stars for which spectroscopically determined parameters are available from LAMOST, RAVE, and/or APOGEE. We then estimate radii for the 355,502 Tycho-2 stars in our sample whose Gaia DR1 parallaxes are precise to ≲10%. For these stars, we achieve effective temperature, bolometric flux, and angular diameter uncertainties of the order of 1%-2% and radius uncertainties of order 8%, and we explore the effect that imposing spectroscopic effective temperature priors has on these uncertainties. These stellar parameters are shown to be reliable for stars with Teff ≲ 7000 K. The over half a million bolometric fluxes and angular diameters presented here will serve as an immediate trove of empirical stellar radii with the Gaia second data release, at which point effective temperature uncertainties will dominate the radius uncertainties. Already, dwarf, subgiant, and giant populations are readily identifiable in our purely empirical luminosity-effective temperature (theoretical) Hertzsprung-Russell diagrams.
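
    A minimal sketch of the flux-to-radius chain used here: the angular diameter follows from the bolometric flux and effective temperature as theta = 2·sqrt(Fbol/(sigma·Teff^4)), and the parallax turns it into a physical radius; the solar test values are approximate.

        import numpy as np

        SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4
        PC = 3.0857e16           # parsec, m
        RSUN = 6.957e8           # solar radius, m

        def radius(fbol, teff, parallax_mas):
            """Stellar radius (solar units) from bolometric flux (W/m^2),
            effective temperature (K) and parallax (mas)."""
            theta = 2.0 * np.sqrt(fbol / (SIGMA * teff ** 4))   # angular diameter, rad
            d = (1000.0 / parallax_mas) * PC                    # distance, m
            return 0.5 * theta * d / RSUN

        # the Sun seen from 10 pc: Fbol ~ 3.2e-10 W/m^2, Teff = 5772 K, parallax = 100 mas
        print(radius(3.2e-10, 5772.0, 100.0))   # ~1 solar radius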

  5. A chi-square goodness-of-fit test for non-identically distributed random variables: with application to empirical Bayes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, W.J.; Cox, D.D.; Martz, H.F.

    1997-12-01

    When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.

  6. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    NASA Astrophysics Data System (ADS)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

    The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we discuss that as the minimum number of required samples depends on rock type it should correspond to some acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
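
    A minimal sketch of this kind of statistical simulation: repeatedly draw small noisy triaxial data sets from known Hoek-Brown parameters (intact rock, s = 1) and examine the scatter of the refitted m and sigma_c; the sample size and noise level are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def hoek_brown(s3, m, sc):
            # Hoek-Brown criterion for intact rock (s = 1)
            return s3 + sc * np.sqrt(m * s3 / sc + 1.0)

        rng = np.random.default_rng(7)
        m_true, sc_true = 10.0, 100.0     # sigma_c in MPa
        s3 = np.linspace(0.0, 30.0, 5)    # five specimens per simulated programme

        fits = []
        for _ in range(500):
            s1 = hoek_brown(s3, m_true, sc_true) * rng.normal(1.0, 0.1, s3.size)
            p, _ = curve_fit(hoek_brown, s3, s1, p0=[8.0, 90.0],
                             bounds=([0.1, 10.0], [50.0, 500.0]))
            fits.append(p)

        m_hat, sc_hat = np.array(fits).T
        print(f"m: mean {m_hat.mean():.1f}, CV {m_hat.std() / m_hat.mean():.0%}")
        print(f"sigma_c: mean {sc_hat.mean():.1f}, CV {sc_hat.std() / sc_hat.mean():.0%}")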

  7. PROFIT: Bayesian profile fitting of galaxy images

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D. S.; Tobar, R.; Moffett, A.; Driver, S. P.

    2017-04-01

    We present PROFIT, a new code for Bayesian two-dimensional photometric galaxy profile modelling. PROFIT consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (PROFIT) and PYTHON (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit, respectively). R PROFIT is also available pre-built from CRAN; however, this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sérsic, Core-Sérsic, broken-exponential, Ferrer, Moffat, empirical King, point-source, and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sérsic profile for the most common values of the Sérsic index n (0.5 < n < 8). The high-level fitting code PROFIT is tested on a sample of galaxies with both SDSS and deeper KiDS imaging. We find good agreement in the fit parameters, with larger scatter in best-fitting parameters from fitting images from different sources (SDSS versus KiDS) than from using different codes (PROFIT versus GALFIT). A large suite of Monte Carlo-simulated images are used to assess prospects for automated bulge-disc decomposition with PROFIT on SDSS, KiDS, and future LSST imaging. We find that the biggest increases in fit quality come from moving from SDSS- to KiDS-quality data, with less significant gains moving from KiDS to LSST.

  8. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.

  9. A fitting empirical potential for NiTi alloy and its application

    NASA Astrophysics Data System (ADS)

    Ren, Guowu; Tang, Tiegang; Sehitoglu, Huseyin

    Due to its superelastic behavior, the NiTi shape memory alloy has received considerable attention over a wide range of industrial and commercial applications. Owing to its complex structural transformations and multiple variants, semiempirical potentials for performing large-scale molecular dynamics simulations to investigate atomistic mechanical processes are very few. In this work, we construct a new interatomic potential for the NiTi alloy by fitting to experimental or ab initio data. The fitted potential correctly predicts the lattice parameters, structural stability, and equation of state for the cubic B2 (austenite) and monoclinic B19' (martensite) phases. In particular, the elastic properties (three elastic constants for B2 and thirteen for B19') are in satisfactory agreement with experiments or ab initio calculations. Furthermore, we apply this potential to molecular dynamics simulations of the mechanical behavior of the NiTi alloy, and the results capture its reversible transformation.

  10. Water sorption equilibria and kinetics of henna leaves

    NASA Astrophysics Data System (ADS)

    Sghaier, Khamsa; Peczalski, Roman; Bagane, Mohamed

    2018-05-01

    In this work, the sorption isotherms of henna leaves were first determined using a dynamic vapor sorption (DVS) device at three temperatures (30, 40, 50 °C). The equilibrium data were well fitted by the GAB model. Secondly, drying kinetics were measured using a pilot convective dryer for three air temperatures (same as above), three velocities (0.5, 1, 1.42 m/s) and four relative humidities (20, 30, 35, 40%). The drying kinetic coefficients were identified by fitting the DVS and pilot dryer data with the Lewis semi-empirical model. In order to compare the obtained kinetic parameters with the literature, the water diffusivities were also identified by fitting the data with the simplified solution of the Fickian diffusion equation. The identified kinetic coefficient depended mainly on air temperature and velocity, which indicated that it represented the external transfer rather than the internal one.
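
    A minimal sketch of the two fits described: the Lewis model MR(t) = exp(-kt) for the kinetic coefficient, and the first term of the Fickian slab solution for an effective diffusivity; the data points and the assumed leaf half-thickness are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        t_min = np.array([0.0, 10, 20, 40, 60, 90, 120, 180])            # min
        MR = np.array([1.0, 0.82, 0.67, 0.45, 0.31, 0.18, 0.11, 0.04])   # moisture ratio, toy

        def lewis(t, k):
            return np.exp(-k * t)          # Lewis thin-layer drying model

        (k,), _ = curve_fit(lewis, t_min, MR, p0=[0.01])
        print(f"k = {k:.4f} min^-1")

        L = 0.15e-3                        # m, assumed leaf half-thickness
        def fick(t, Deff):
            # first term of the Fickian series solution for a slab
            return (8.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * Deff * t / (4.0 * L ** 2))

        (Deff,), _ = curve_fit(fick, t_min * 60.0, MR, p0=[1e-11])   # t in seconds
        print(f"Deff = {Deff:.2e} m^2/s")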

  11. libprofit: Image creation from luminosity profiles

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D.; Tobar, R.

    2016-12-01

    libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).

  12. GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data

    NASA Technical Reports Server (NTRS)

    VanHelleputte, Tom; Visser, Pieter

    2007-01-01

    Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high-sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes both missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, the measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS. It is extensively tested with data from the CHAMP and GRACE satellites. As accelerometer observations are typically affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and a bias factor for each of the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factors are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and the empirical accelerations estimated when using these models. When the non-conservative force models are replaced with accelerometer observations while empirical accelerations are still estimated, good orbit precision is achieved. Processing 100 days of GRACE B data results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits, a slightly better consistency than in the case when force models are used. A purely dynamic orbit, without estimating empirical accelerations and thus adjusting only the six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in-orbit calibrated accelerometer observations can be used to validate the modelled accelerations and the estimated empirical accelerations computed with the GHOST tools. In the along-track direction they show the best resemblance, with a mean correlation coefficient of 93% for the same period. In the radial and normal directions the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible: the observations during these days show fluctuations that the modelled and empirical accelerations cannot follow.
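
    A minimal sketch of the per-axis calibration described, estimating one scale and one bias by linear least squares against a reference (here, synthetic modeled accelerations); the sensor model and all numbers are assumptions.

        import numpy as np

        def calibrate_axis(a_meas, a_ref):
            """Least-squares scale s and bias b so that s*a_meas + b best
            matches the reference acceleration along one axis."""
            A = np.column_stack([a_meas, np.ones_like(a_meas)])
            (s, b), *_ = np.linalg.lstsq(A, a_ref, rcond=None)
            return s, b

        rng = np.random.default_rng(3)
        truth = rng.normal(0.0, 1e-7, 5000)                      # m/s^2, along-track
        a_meas = truth / 0.96 + 2.0e-6 + rng.normal(0.0, 5e-9, truth.size)

        s, b = calibrate_axis(a_meas, truth)
        print(f"scale = {s:.3f}, bias = {b:.2e} m/s^2")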

  13. Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis

    NASA Astrophysics Data System (ADS)

    Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.

    2013-04-01

    We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
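
    A minimal sketch of extracting the three fit parameters from a transient rise, assuming the standard stretched-exponential (KWW) form; the dark relaxation is fitted the same way with a decaying branch, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_rise(t, s0, ds, tau, beta):
            # photoconductivity rise under illumination
            return s0 + ds * (1.0 - np.exp(-((t / tau) ** beta)))

        rng = np.random.default_rng(2)
        t = np.linspace(1.0, 8.64e4, 400)          # one day, seconds
        sigma = stretched_rise(t, 1e-6, 5e-4, 3.0e3, 0.55)
        sigma *= rng.lognormal(0.0, 0.02, t.size)  # multiplicative measurement noise

        p, _ = curve_fit(stretched_rise, t, sigma, p0=[1e-6, 1e-4, 1e3, 0.5],
                         bounds=([0.0, 0.0, 1.0, 0.1], [1.0, 1.0, 1e6, 1.0]))
        print(f"tau = {p[2]:.0f} s, beta = {p[3]:.2f}")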

  14. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  15. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  16. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
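
    A minimal sketch of the mimicry being probed: draw bout durations from a two-exponential mixture, fit a power law above a cutoff by maximum likelihood, and measure the Kolmogorov-Smirnov distance; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(11)
        # wake-bout durations from a two-exponential mixture (seconds)
        bouts = np.where(rng.random(5000) < 0.7,
                         rng.exponential(30.0, 5000),
                         rng.exponential(600.0, 5000))
        xmin = 10.0
        bouts = bouts[bouts >= xmin]

        alpha = 1.0 + bouts.size / np.sum(np.log(bouts / xmin))   # power-law MLE
        x = np.sort(bouts)
        emp = np.arange(1, x.size + 1) / x.size
        model = 1.0 - (x / xmin) ** (1.0 - alpha)
        print(f"alpha = {alpha:.2f}, KS distance = {np.max(np.abs(emp - model)):.3f}")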

  17. Genotypic Complexity of Fisher’s Geometric Model

    PubMed Central

    Hwang, Sungmin; Park, Su-Chan; Krug, Joachim

    2017-01-01

    Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460

  18. Thermal Conductivity of Metallic Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hin, Celine

    This project has developed modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focus on two methods. The first method was developed by the team at the University of Wisconsin Madison, who devised a practical and general modeling approach for the thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method was developed by the team at Virginia Tech and consists of determining the thermal conductivity using only ab-initio methods without any fitting parameters. Both methods were complementary. The models incorporated both phonon and electron contributions. Good agreement with experimental data over a wide temperature range was found. The models also provided insight into the different physical factors that govern the thermal conductivity at different temperatures. The models were general enough to incorporate more complex effects like additional alloying species, defects, transmutation products and noble gas bubbles to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup.

    Introduction: Thermal conductivity is an important thermal-physical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurements of thermal conductivity and its correlation with composition and temperature from empirical fitting are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed by Kim, Cho and Sohn [2], due to the difficulty of doing experiments on actinide materials, thermal conductivities of metallic fuels have only been measured at limited alloy compositions and temperatures, some of the measurements even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products build up, thermal conductivity also changes significantly [3]. Unfortunately, fundamental understanding of the effect of fission products is also currently lacking. In this project, we probe the thermal conductivity of metallic fuels with ab-initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work both complements experimental data, by determining thermal conductivity over wider composition and temperature ranges than are available experimentally, and develops mechanistic understanding to guide better design of metallic fuels in the future. So far, we have focused on the α-U perfect crystal, the ground-state phase of U metal. We focus on two methods. The first method was developed by the team at the University of Wisconsin Madison, who devised a practical and general modeling approach for the thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method was developed by the team at Virginia Tech and consists of determining the thermal conductivity using only ab-initio methods without any fitting parameters. Both methods were complementary and very helpful for understanding the physics behind the thermal conductivity in metallic uranium and other materials with similar characteristics.

    In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we present the performance of the project in terms of milestones, publications, and presentations.
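
    As an example of the electronic contribution such models must capture, a minimal Wiedemann-Franz sketch, kappa_e = L0·T/rho; the alpha-U resistivity figure is a rough room-temperature value used only for illustration.

        L0 = 2.44e-8   # Sommerfeld value of the Lorenz number, W Ohm K^-2

        def kappa_electronic(T, rho):
            """Electronic thermal conductivity from the Wiedemann-Franz law."""
            return L0 * T / rho

        # rough alpha-U resistivity near room temperature (~28 microOhm cm, illustrative)
        print(kappa_electronic(300.0, 28e-8))   # ~26 W m^-1 K^-1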

  19. On application of asymmetric Kan-like exact equilibria to the Earth magnetotail modeling

    NASA Astrophysics Data System (ADS)

    Korovinskiy, Daniil B.; Kubyshkina, Darya I.; Semenov, Vladimir S.; Kubyshkina, Marina V.; Erkaev, Nikolai V.; Kiehas, Stefan A.

    2018-04-01

    A specific class of solutions of the Vlasov-Maxwell equations, developed by means of generalization of the well-known Harris-Fadeev-Kan-Manankova family of exact two-dimensional equilibria, is studied. The examined model reproduces the current sheet bending and shifting in the vertical plane, arising from the Earth dipole tilting and the solar wind nonradial propagation. The generalized model allows magnetic configurations with equatorial magnetic fields decreasing in a tailward direction as slowly as 1/x, contrary to the original Kan model (1/x^3); magnetic configurations with a single X point are also available. The analytical solution is compared with the empirical T96 model in terms of the magnetic flux tube volume. It is found that parameters of the analytical model may be adjusted to fit a wide range of averaged magnetotail configurations. The best agreement between analytical and empirical models is obtained for the midtail at distances beyond 10-15 RE at high levels of magnetospheric activity. The essential model parameters (current sheet scale, current density) are compared to Cluster data of magnetotail crossings. The best match of parameters is found for single-peaked current sheets with medium values of number density, proton temperature and drift velocity.

  20. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating the model parameters and the performance of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. The performances are given at various signal-to-noise ratios (SNRs). Considering parameter estimation, the results show that the confidence in the estimated parameters improves as the SNR of the response to be fitted increases. Considering model selection, the results show that information criteria are suitable statistical criteria for selecting the number of exponentials.
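
    A minimal sketch of the order-selection task on synthetic data: fit delayed mono- and double-exponential models and compare an information criterion (AIC here; the paper evaluates several criteria); all parameter values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def mono(t, a0, a1, d1, tau1):
            # offset plus one delayed exponential
            return a0 + a1 * (1.0 - np.exp(-(t - d1) / tau1)) * (t > d1)

        def double(t, a0, a1, d1, tau1, a2, d2, tau2):
            # offset plus two delayed exponentials
            return mono(t, a0, a1, d1, tau1) + a2 * (1.0 - np.exp(-(t - d2) / tau2)) * (t > d2)

        def aic(y, yhat, k):
            n = y.size
            return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

        rng = np.random.default_rng(0)
        t = np.arange(0.0, 360.0, 5.0)   # s, breath-averaged grid
        y = double(t, 0.5, 2.0, 15.0, 25.0, 0.4, 120.0, 90.0) + rng.normal(0.0, 0.05, t.size)

        p1, _ = curve_fit(mono, t, y, p0=[0.5, 2.0, 10.0, 30.0])
        p2, _ = curve_fit(double, t, y, p0=[0.5, 2.0, 10.0, 30.0, 0.3, 100.0, 80.0])
        print("AIC mono  :", aic(y, mono(t, *p1), 4))
        print("AIC double:", aic(y, double(t, *p2), 7))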

  1. The Derivation of Sink Functions of Wheat Organs using the GREENLAB Model

    PubMed Central

    Kang, Mengzhen; Evers, Jochem B.; Vos, Jan; de Reffye, Philippe

    2008-01-01

    Background and Aims In traditional crop growth models assimilate production and partitioning are described with empirical equations. In the GREENLAB functional–structural model, however, allocation of carbon to different kinds of organs depends on the number and relative sink strengths of growing organs present in the crop architecture. The aim of this study is to generate sink functions of wheat (Triticum aestivum) organs by calibrating the GREENLAB model using a dedicated data set, consisting of time series on the mass of individual organs (the ‘target data’). Methods An experiment was conducted on spring wheat (Triticum aestivum, ‘Minaret’) in a growth chamber from 2004 to 2005. Four harvests were made of six plants each to determine the size and mass of individual organs, including the root system, leaf blades, sheaths, internodes and ears of the main stem and different tillers. Leaf status (appearance, expansion, maturity and death) of these 24 plants was recorded. With the structures and mass of organs of four individual sample plants, the GREENLAB model was calibrated using a non-linear least-squares fitting method, the aim of which was to minimize the difference in mass of the organs between measured data and model output, and to provide the parameter values of the model (the sink strengths of organs of each type, age and tiller order, and two empirical parameters linked to biomass production). Key Results and Conclusions The masses of all measured organs from one plant from each harvest were fitted simultaneously. With estimated parameters for sink and source functions, the model predicted the mass and size of individual organs at each position of the wheat structure in a mechanistic way. In addition, there was close agreement between experimentally observed and simulated values of leaf area index. PMID:18045794

  2. An Empirical Fitting Method to Type Ia Supernova Light Curves. III. A Three-parameter Relationship: Peak Magnitude, Rise Time, and Photospheric Velocity

    NASA Astrophysics Data System (ADS)

    Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.

    2018-05-01

    We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (0.005 < z < 0.04) and find comparable scatter. However, SNe Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s⁻¹ of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
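
    One way to picture the three-parameter relation is a plane fit of extinction-corrected peak magnitude against the logarithms of rise time and photospheric velocity; the sketch below uses invented placeholder values, not the paper's 47-SN sample, and a plain least-squares plane rather than the fireball-derived form:

        # Empirical plane fit: M_peak ~ a + b*log10(t_rise) + c*log10(v_ph)
        import numpy as np

        t_rise = np.array([16.5, 17.2, 18.1, 19.0, 17.8])            # days (hypothetical)
        v_ph = np.array([10.8, 11.5, 10.2, 12.1, 11.0])              # 10^3 km/s (hypothetical)
        m_peak = np.array([-19.30, -19.25, -19.40, -19.10, -19.28])  # mag (hypothetical)

        A = np.column_stack([np.ones_like(t_rise), np.log10(t_rise), np.log10(v_ph)])
        coef, res, rank, sv = np.linalg.lstsq(A, m_peak, rcond=None)
        residuals = m_peak - A @ coef
        print("coefficients (const, rise-time, velocity):", coef)
        print("rms scatter: %.3f mag" % residuals.std())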

  3. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
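
    The corner-frequency bias discussed above can be illustrated with a least-squares toy example; the spectral form, noise level and parameter values below are assumptions for illustration, and the study itself uses Bayesian hierarchical modelling rather than curve_fit:

        # Bias in fc when the spectral falloff rate n is fixed at 2.
        import numpy as np
        from scipy.optimize import curve_fit

        def source_spec(f, omega0, fc, n):
            # Simple one-corner source spectrum with variable falloff
            return omega0 / (1.0 + (f / fc) ** n)

        f = np.logspace(-1.5, 1.0, 80)         # Hz
        truth = source_spec(f, 1.0, 0.2, 1.6)  # true falloff n = 1.6
        rng = np.random.default_rng(1)
        obs = truth * rng.lognormal(0.0, 0.1, f.size)

        # Falloff rate left free
        p_free, _ = curve_fit(source_spec, f, obs, p0=[1.0, 0.3, 2.0])
        # Falloff rate fixed at 2, as in many classical studies
        p_fix, _ = curve_fit(lambda f, o, fc: source_spec(f, o, fc, 2.0),
                             f, obs, p0=[1.0, 0.3])
        print("fc (n free): %.3f" % p_free[1])
        print("fc (n = 2):  %.3f  <- biased" % p_fix[1])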

  4. Evolution of haploid-diploid life cycles when haploid and diploid fitnesses are not equal.

    PubMed

    Scott, Michael F; Rescan, Marie

    2017-02-01

    Many organisms spend a significant portion of their life cycle as haploids and as diploids (a haploid-diploid life cycle). However, the evolutionary processes that could maintain this sort of life cycle are unclear. Most previous models of ploidy evolution have assumed that the fitness effects of new mutations are equal in haploids and homozygous diploids; however, this equivalency is not supported by empirical data. With different mutational effects, the overall (intrinsic) fitness of a haploid would not be equal to that of a diploid after a series of substitution events. Intrinsic fitness differences between haploids and diploids can also arise directly, for example because diploids tend to have larger cell sizes than haploids. Here, we incorporate intrinsic fitness differences into genetic models for the evolution of time spent in the haploid versus diploid phases, in which ploidy affects whether new mutations are masked. Life-cycle evolution can be affected by intrinsic fitness differences between phases, the masking of mutations, or a combination of both. We find parameter ranges where these two selective forces act and show that the balance between them can favor convergence on a haploid-diploid life cycle, which is not observed in the absence of intrinsic fitness differences.

  5. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    PubMed

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of the Lyman model parameters m and TD_50, and conversely m and TD_50 are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d_ref, n, v_eff and the Niemierko equivalent uniform dose (EUD), where d_ref and v_eff are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
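
    As a sketch of the idea, assuming example Lyman parameters, the code below computes the standard Lyman (probit) NTCP and fits a one-parameter exponential-type surrogate to it; the Gompertz form used here is an illustrative stand-in, not necessarily the exact function introduced in the paper:

        # Lyman probit NTCP vs. a one-parameter exponential-type surrogate.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import curve_fit

        TD50, m = 65.0, 0.14  # example Lyman parameters for an OAR (illustrative)

        def ntcp_lyman(eud):
            # Standard Lyman formula: probit in (EUD - TD50)/(m*TD50)
            return norm.cdf((eud - TD50) / (m * TD50))

        def ntcp_surrogate(eud, alpha):
            # Gompertz-type curve with a single shape parameter alpha
            return np.exp(-np.exp(alpha * (1.0 - eud / TD50)))

        eud = np.linspace(30.0, 100.0, 200)
        alpha_fit, _ = curve_fit(ntcp_surrogate, eud, ntcp_lyman(eud), p0=[10.0])
        err = np.max(np.abs(ntcp_surrogate(eud, *alpha_fit) - ntcp_lyman(eud)))
        print("alpha = %.2f, max |deviation| = %.4f" % (alpha_fit[0], err))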

  6. Near transferable phenomenological n-body potentials for noble metals

    NASA Astrophysics Data System (ADS)

    Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David

    2017-09-01

    We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.

  7. Near transferable phenomenological n-body potentials for noble metals.

    PubMed

    Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David

    2017-09-06

    We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.
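
    The second-moment band term mentioned in both records can be sketched as follows, assuming generic Rosato-Guillope-Legrand-style parameters; the paper's additional electron-gas repulsion and long-range pair function are omitted, and the numbers are illustrative placeholders, not the fitted values of this work:

        # Second-moment tight-binding (SMA-TB) cohesive energy of a small cluster.
        import numpy as np

        A, xi, p, q, r0 = 0.1028, 1.178, 10.928, 3.139, 2.556  # Cu-like values (illustrative)

        def sma_tb_energy(positions):
            """Total SMA-TB energy (eV) of an atom cluster."""
            e_tot = 0.0
            for i in range(len(positions)):
                rij = np.linalg.norm(positions - positions[i], axis=1)
                rij = rij[rij > 1e-12]  # exclude the self-distance
                rep = A * np.sum(np.exp(-p * (rij / r0 - 1.0)))               # pair repulsion
                band = np.sqrt(np.sum(xi**2 * np.exp(-2.0 * q * (rij / r0 - 1.0))))  # n-body attraction
                e_tot += rep - band
            return e_tot

        # Minimal usage: a 13-atom fcc-like cuboctahedron with nearest-neighbour distance r0
        shell = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0],
                          [1, 0, 1], [1, 0, -1], [-1, 0, 1], [-1, 0, -1],
                          [0, 1, 1], [0, 1, -1], [0, -1, 1], [0, -1, -1]]) / np.sqrt(2.0)
        cluster = np.vstack([[0, 0, 0], shell]) * r0
        print("E(13-atom cluster) = %.2f eV" % sma_tb_energy(cluster))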

  8. Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations

    NASA Technical Reports Server (NTRS)

    Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.

    1981-01-01

    Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio is dependent on the mean density of the wind and not on effective temperature value, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically correlated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.

  9. Semi empirical formula for exposure buildup factors

    NASA Astrophysics Data System (ADS)

    Seenappa, L.; Manjunatha, H. C.; Sridhar, K. N.; Hanumantharayappa, Chikka

    2017-10-01

    The photon buildup factor is an important quantity in nuclear safety applications such as radiation shielding and dosimetry. The buildup factor is a coefficient that represents the contribution of photons scattered in the target medium. The present work formulates semi-empirical formulae for exposure buildup factors (EBFs) in the energy region 0.015-15 MeV, for the atomic number range 1 ≤ Z ≤ 92 and for penetration depths up to 40 mean free paths. The EBFs produced by the present formula are compared with data available in the literature, and good agreement is found. This formula is the first of its kind to calculate EBFs without using geometric-progression fitting parameters. It may also be used to calculate EBFs for compounds, mixtures and biological samples, and it produces EBFs for elements and mixtures quickly. This semi-empirical formula is therefore useful in calculations of EBFs, which in turn support radiation protection and dosimetry.

  10. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
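
    A rough sketch of the isolation measure, assuming a Gaussian bounding function (the actual bounding form and optimizer of WBEMD are not reproduced here) and treating two test signals directly as stand-in IMFs:

        # Area under a fitted spectral envelope as a mode-isolation metric.
        import numpy as np
        from scipy.optimize import curve_fit

        def bounding_area(imf, fs):
            # Magnitude spectrum of the IMF, normalized to its peak
            freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
            mag = np.abs(np.fft.rfft(imf))
            mag /= mag.max()
            f0 = freqs[np.argmax(mag)]  # characteristic frequency

            def envelope(f, amp, width):
                # Assumed Gaussian bounding function centred on f0
                return amp * np.exp(-((f - f0) / width) ** 2)

            (amp, width), _ = curve_fit(envelope, freqs, mag, p0=[1.0, f0 / 4 + 1.0])
            df = freqs[1] - freqs[0]
            return np.sum(envelope(freqs, amp, abs(width))) * df  # area under the envelope

        fs = 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        pure = np.sin(2 * np.pi * 50 * t)                # well-separated "IMF"
        mixed = pure + 0.8 * np.sin(2 * np.pi * 58 * t)  # mode mixing
        print("area (pure):  %.3f" % bounding_area(pure, fs))
        print("area (mixed): %.3f" % bounding_area(mixed, fs))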

  11. The Dynamic Characteristic and Hysteresis Effect of an Air Spring

    NASA Astrophysics Data System (ADS)

    Löcken, F.; Welsch, M.

    2015-02-01

    In many applications of vibration technology, especially in chassis design, air springs are a common alternative to steel spring concepts. A design-independent and therefore universal approach is presented to describe the dynamic characteristics of such springs. Differential and constitutive equations based on energy balances of the enclosed volume and the mountings are given to describe the nonlinear and dynamic characteristics. All parameters can therefore be estimated directly from physical and geometrical properties, without parameter fitting. The numerically solved equations fit measurements of a passenger car air spring very well. In a second step, a simplification of this model leads to a purely mechanical equation. While in principle the same parameters are used, only an empirical correction of the effective heat transfer coefficient is needed to compensate for the simplifications made. Finally, a linearization of this equation leads to an analogous mechanical model that can be assembled from two common spring elements and one dashpot element in a specific arrangement. This translation into "mechanical language" enables a system description with a simple force-displacement law and a mechanical interpretation of the nonobvious hysteresis and stiffness increase of an air spring.
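
    The linearized analog described above (two springs and one dashpot) can be read as a standard-linear-solid arrangement; the sketch below, with invented stiffness and damping values, shows the resulting stiffness increase and hysteresis (loss angle) over frequency:

        # Complex dynamic stiffness of a standard linear solid: k0 in parallel
        # with a Maxwell arm (spring k1 in series with dashpot c).
        import numpy as np

        k0 = 30_000.0  # N/m, low-frequency (isothermal) stiffness
        k1 = 12_000.0  # N/m, added stiffness at the high-frequency (adiabatic) limit
        c = 2_000.0    # N*s/m, dashpot representing heat-transfer losses

        def dynamic_stiffness(omega):
            """Complex stiffness K(i*omega) of the spring-spring-dashpot analog."""
            iw = 1j * omega
            return k0 + k1 * iw * c / (k1 + iw * c)

        for f_hz in (0.1, 1.0, 10.0):
            K = dynamic_stiffness(2 * np.pi * f_hz)
            print("f = %5.1f Hz  |K| = %8.0f N/m  loss angle = %5.2f deg"
                  % (f_hz, abs(K), np.degrees(np.angle(K))))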

  12. Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow

    NASA Astrophysics Data System (ADS)

    Lambert, Gregory; Wapperom, Peter; Baird, Donald

    2017-12-01

    Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.

  13. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics such as band centres are reported, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework comprehensively records the curve-fitting parameters used and the derived metrics, and is intended as an example of a format for dissemination when curve fitting data.
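
    The same workflow translates directly from Excel® to any scripting language; the sketch below (synthetic wavelengths and band parameters, not data from the paper) removes a straight-line continuum and fits a Gaussian to recover a band centre:

        # Continuum removal followed by a Gaussian band fit.
        import numpy as np
        from scipy.optimize import curve_fit

        wl = np.linspace(800.0, 1200.0, 201)  # wavelength, nm
        true_band = 1.0 - 0.25 * np.exp(-0.5 * ((wl - 990.0) / 40.0) ** 2)
        continuum = 1.0 + 2e-4 * (wl - 800.0)
        rng = np.random.default_rng(2)
        refl = true_band * continuum + rng.normal(0, 0.003, wl.size)

        # Straight-line continuum anchored at the band shoulders
        anchors = (wl < 830.0) | (wl > 1170.0)
        c1, c0 = np.polyfit(wl[anchors], refl[anchors], 1)
        removed = refl / (c1 * wl + c0)

        def gauss_band(w, depth, centre, width):
            return 1.0 - depth * np.exp(-0.5 * ((w - centre) / width) ** 2)

        p, _ = curve_fit(gauss_band, wl, removed, p0=[0.2, 1000.0, 50.0])
        print("band centre: %.1f nm, depth: %.3f" % (p[1], p[0]))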

  14. Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars

    NASA Astrophysics Data System (ADS)

    Martínez Ledesma, M.; Diaz, M. A.

    2017-12-01

    The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to study the ionosphere. This radar system determines plasma parameters by sending powerful electromagnetic pulses into the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis yields ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity obtained between the F1 and the lower F2 layers. In this case, very similar signals are obtained for different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere when fitting ambiguous data. More recent works make use of parameters estimated from the plasma-line band of the radar to reduce the number of parameters to determine. In this work we determine the estimation error of the ion composition ambiguity when using plasma-line electron density measurements. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that correct estimation depends strongly on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNRs) have been performed to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations. It can also be used as an empirical method to compare the efficiency of different algorithms and methods for resolving the ion composition ambiguity.

  15. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

    Earthquake engineering parameters are very important in engineering practice, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters with the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two directions. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. The simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates radially in all directions, with strip-like radiation patterns in the fault-parallel component and circular patterns in the fault-normal component; PGV shows good agreement between observed and synthetic records, but its distribution differs between components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, with observed values in close agreement with synthetic values. (4) The predominant period differs considerably across parts of Kyushu in the fault-normal component; it is strongly affected by site conditions. (5) Most parameters serve as good references where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis and earthquake disaster analysis can be conducted in future urban planning.

  16. Hydrograph Predictions of Glacial Lake Outburst Floods From an Ice-Dammed Lake

    NASA Astrophysics Data System (ADS)

    McCoy, S. W.; Jacquet, J.; McGrath, D.; Koschitzki, R.; Okuinghttons, J.

    2017-12-01

    Understanding the time evolution of glacial lake outburst floods (GLOFs), and ultimately predicting peak discharge, is crucial to mitigating the impacts of GLOFs on downstream communities and understanding concomitant surface change. The dearth of in situ measurements taken during GLOFs has left many GLOF models currently in use untested. Here we present a dataset of 13 GLOFs from Lago Cachet Dos, Aysen Region, Chile, in which we detail measurements of key environmental variables (total volume drained, lake temperature, and lake inflow rate) and high-temporal-resolution discharge measurements at the source lake, in addition to well-constrained ice thickness and bedrock topography. Using this dataset we test two common empirical equations as well as the physically based model of Spring-Hutter-Clarke. We find that the commonly used empirical relationships, based solely on lake volume drained, fail to predict the large variability in observed peak discharges from Lago Cachet Dos. This disagreement is likely because these equations do not consider additional environmental variables that we show also control peak discharge, primarily lake water temperature and the rate of meltwater inflow to the source lake. We find that the Spring-Hutter-Clarke model can accurately simulate the exponentially rising hydrographs that are characteristic of ice-dammed GLOFs, as well as the order-of-magnitude variation in peak discharge between events, if the hydraulic roughness parameter is allowed to be a free fitting parameter. However, the Spring-Hutter-Clarke model overpredicts peak discharge in all cases by 10 to 35%. The systematic overprediction of peak discharge by the model is related to its abrupt flood termination, which misses the observed steep falling limb of the flood hydrograph. Although satisfactory model fits are produced, the range in hydraulic roughness required to obtain these fits across all events was large, which suggests that current models do not completely capture the physics of these systems, thus limiting their ability to truly predict peak discharges using only independently constrained parameters. We suggest what some of these missing physics might be.
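
    The empirical relationships tested are typically of the power-law form Qp = k * V^b, fitted in log-log space; a sketch with placeholder values (not the Lago Cachet Dos measurements) follows:

        # Log-log regression for the classical volume-discharge relation Qp = k * V^b.
        import numpy as np

        V = np.array([2e6, 5e6, 1e7, 3e7, 2e8])         # lake volume drained, m^3 (hypothetical)
        Qp = np.array([120., 250., 380., 900., 3500.])  # observed peak discharge, m^3/s (hypothetical)

        b, log_k = np.polyfit(np.log10(V), np.log10(Qp), 1)
        k = 10 ** log_k
        print("Qp ~ %.3g * V^%.2f" % (k, b))

        # Single-variable fits like this ignore lake temperature and inflow rate,
        # which the study identifies as additional controls on peak discharge.
        print("predicted Qp for V = 5e7 m^3: %.0f m^3/s" % (k * (5e7) ** b))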

  17. Empirical source noise prediction method with application to subsonic coaxial jet mixing noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Weir, D. S.

    1982-01-01

    A general empirical method, developed for source noise predictions, uses tensor splines to represent the dependence of the acoustic field on frequency and direction and Taylor's series to represent the dependence on source state parameters. The method is applied to prediction of mixing noise from subsonic circular and coaxial jets. A noise data base of 1/3-octave-band sound pressure levels (SPL's) from 540 tests was gathered from three countries: United States, United Kingdom, and France. The SPL's depend on seven variables: frequency, polar direction angle, and five source state parameters: inner and outer nozzle pressure ratios, inner and outer stream total temperatures, and nozzle area ratio. A least-squares seven-dimensional curve fit defines a table of constants which is used for the prediction method. The resulting prediction has a mean error of 0 dB and a standard deviation of 1.2 dB. The prediction method is used to search for a coaxial jet which has the greatest coaxial noise benefit as compared with an equivalent single jet. It is found that benefits of about 6 dB are possible.

  18. An empirical propellant response function for combustion stability predictions

    NASA Technical Reports Server (NTRS)

    Hessler, R. O.

    1980-01-01

    An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.

  19. The AKARI IRC asteroid flux catalogue: updated diameters and albedos

    NASA Astrophysics Data System (ADS)

    Alí-Lagoa, V.; Müller, T. G.; Usui, F.; Hasegawa, S.

    2018-05-01

    The AKARI IRC all-sky survey provided more than twenty thousand thermal infrared observations of over five thousand asteroids. Diameters and albedos were obtained by fitting an empirically calibrated version of the standard thermal model to these data. Following the publication of the flux catalogue in October 2016, our aim here is to present the AKARI IRC all-sky survey data and discuss valuable scientific applications in the field of small-body physical properties. As an example, we update the catalogue of asteroid diameters and albedos based on AKARI using the near-Earth asteroid thermal model (NEATM). We fit the NEATM to derive asteroid diameters and, whenever possible, infrared beaming parameters. We fit groups of observations taken for the same object at different epochs of the survey separately, so we compute more than one diameter for approximately half of the catalogue. We obtained a total of 8097 diameters and albedos for 5170 asteroids, and we fitted the beaming parameter for almost two thousand of them. When it was not possible to fit the beaming parameter, we used a straight-line fit to our sample's beaming-parameter-versus-phase-angle plot to set the default value for each fit individually, instead of using a single average value. Our diameters agree with stellar-occultation-based diameters well within the accuracy expected for the model. They also match the previous AKARI-based catalogue at phase angles lower than 50°, but we find a systematic deviation at higher phase angles, at which near-Earth and Mars-crossing asteroids were observed. The AKARI IRC all-sky survey is an essential source of information about asteroids, especially the large ones, since it provides observations at different observation geometries, rotational coverages and aspect angles. For example, by comparing in more detail a few asteroids for which dimensions were derived from occultations, we discuss how the multiple observations per object may already provide three-dimensional information about elongated objects, even based on an idealised model like the NEATM. Finally, we enumerate additional expected applications for more complex models, especially in combination with other catalogues. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A85
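
    The default-beaming-parameter scheme can be sketched as a simple linear trend with phase angle; the data pairs below and the resulting slope and intercept are invented for illustration, not the values derived from the AKARI sample:

        # Default beaming parameter eta from a linear trend with phase angle.
        import numpy as np

        # (phase angle deg, fitted eta) pairs from well-observed objects (hypothetical)
        alpha = np.array([5., 12., 20., 31., 44., 58., 70.])
        eta = np.array([0.95, 1.00, 1.05, 1.15, 1.25, 1.40, 1.52])

        slope, intercept = np.polyfit(alpha, eta, 1)

        def default_eta(phase_angle_deg):
            """Default beaming parameter for a fit where eta is unconstrained."""
            return intercept + slope * phase_angle_deg

        print("eta(alpha = 25 deg) = %.2f" % default_eta(25.0))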

  20. Computer modeling of electrical performance of detonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.

    1995-05-01

    An empirical model of detonator electrical performance, which describes the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI or slapper) as a function of energy deposition, will be described. This model features many parameters that can be adjusted to obtain a close fit to experimental data. This has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper is a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.

  1. AAA gunner model based on observer theory. [predicting a gunner's tracking response]

    NASA Technical Reports Server (NTRS)

    Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.

    1978-01-01

    The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that its structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve-fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
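
    A compact sketch of the identification procedure, using a toy first-order tracking response in place of the observer-based gunner model and a finite-difference Jacobian for the Gauss-Newton step:

        # Gauss-Newton least-squares identification of model parameters.
        import numpy as np

        def model(theta, t):
            k, tau = theta
            return k * (1.0 - np.exp(-t / tau))  # toy first-order tracking response

        def gauss_newton(theta, t, y, iters=20):
            theta = np.asarray(theta, dtype=float)
            for _ in range(iters):
                r = y - model(theta, t)  # residuals
                J = np.empty((t.size, theta.size))
                for j in range(theta.size):
                    d = np.zeros_like(theta)
                    d[j] = 1e-6 * max(1.0, abs(theta[j]))
                    # residual Jacobian column by forward differences
                    J[:, j] = -(model(theta + d, t) - model(theta, t)) / d[j]
                theta = theta + np.linalg.lstsq(J, -r, rcond=None)[0]
            return theta

        t = np.linspace(0.0, 5.0, 100)
        rng = np.random.default_rng(3)
        y = model([2.0, 0.8], t) + rng.normal(0, 0.02, t.size)
        print("identified (gain, time constant):", gauss_newton([1.0, 1.0], t, y))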

  2. Linking aquifer spatial properties and non-Fickian transport in mobile-immobile like alluvial settings

    USGS Publications Warehouse

    Zhang, Yong; Green, Christopher T.; Baeumer, Boris

    2014-01-01

    Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.

  3. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    PubMed

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycled ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.001). The model requires application of only one parameter, the sludge volume index (SVI), which is readily incorporated into routine analysis. By combining this model with the conservation of mass equation, an empirical model for compression settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.

  4. Electron momentum density and Compton profile by a semi-empirical approach

    NASA Astrophysics Data System (ADS)

    Aguiar, Julio C.; Mitnik, Darío; Di Rocco, Héctor O.

    2015-08-01

    Here we propose a semi-empirical approach to describe, with good accuracy, the electron momentum densities and Compton profiles for a wide range of pure crystalline metals. In the present approach, we use an experimental Compton profile to fit an analytical expression for the momentum densities of the valence electrons. This expression is similar to a Fermi-Dirac distribution function with two parameters, one of which coincides with the ground-state kinetic energy of the free-electron gas while the other resembles the electron-electron interaction energy. In the proposed scheme, conduction electrons are neither completely free nor completely bound to the atomic nucleus. This procedure allows us to include correlation effects. We tested the approach for all metals with Z = 3-50 and show the results for three representative elements, Li, Be and Al, against high-resolution experiments.
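
    A sketch of the two-parameter form and its fit, with synthetic data standing in for an experimentally derived momentum density; the parameter names and values are illustrative assumptions:

        # Fermi-Dirac-like valence momentum density with two parameters.
        import numpy as np
        from scipy.optimize import curve_fit

        def momentum_density(p, e_f, w):
            """n(p): e_f plays the role of the free-electron kinetic energy scale,
            w the smearing resembling the electron-electron interaction energy."""
            return 1.0 / (1.0 + np.exp((0.5 * p**2 - e_f) / w))

        p = np.linspace(0.0, 2.5, 60)  # momentum, atomic units
        rng = np.random.default_rng(4)
        data = momentum_density(p, 0.30, 0.03) + rng.normal(0, 0.01, p.size)

        (e_f, w), _ = curve_fit(momentum_density, p, data, p0=[0.25, 0.05])
        print("fitted e_f = %.3f a.u., w = %.3f a.u." % (e_f, w))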

  5. A systematic study of multiple minerals precipitation modelling in wastewater treatment.

    PubMed

    Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J

    2015-11-15

    Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed for integration into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous-phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach has the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h^-1 with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually at larger coefficient values. This suggests that a large kinetic coefficient could be used when measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be fitted independently (keeping all other model parameters constant). Implementation was therefore relatively flexible and would be readily expandable to include other minerals.

  6. Probability density functions for use when calculating standardised drought indices

    NASA Astrophysics Data System (ADS)

    Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie

    2015-04-01

    Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The Tweedie is a three-parameter distribution which includes the Gamma distribution as a special case. It is bounded below at zero and has enough flexibility to fit most behaviours observed in the data. It does not always outperform the three-parameter distributions, but the rejection rates are similar. In addition, for certain parameter values the Tweedie distribution has a positive mass at zero, which means that ephemeral streams and months with zero rainfall can be modelled. It holds potential for wider application in drought studies in other climates and types of catchment.
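
    The standardisation procedure itself is compact; the sketch below fits a gamma distribution (the traditional choice discussed above) to synthetic 3-month accumulations and maps the quantiles to a standard Normal, with a Shapiro-Wilk check of the result. A Tweedie fit would replace the gamma step in the approach favoured above, but it has no off-the-shelf scipy implementation:

        # Standardised index: fit a distribution, then transform quantiles to N(0, 1).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        monthly_rain = rng.gamma(shape=2.0, scale=30.0, size=360)  # mm, synthetic

        # 3-month accumulations
        acc = np.convolve(monthly_rain, np.ones(3), mode="valid")

        a, loc, scale = stats.gamma.fit(acc, floc=0.0)  # lower bound fixed at 0
        quantiles = stats.gamma.cdf(acc, a, loc=loc, scale=scale)
        spi = stats.norm.ppf(quantiles)                 # the standardised index

        print("mean %.2f, sd %.2f" % (spi.mean(), spi.std()))  # should be ~0 and ~1
        # Shapiro-Wilk normality check, as used to reject fits in the study
        print("Shapiro-Wilk p = %.3f" % stats.shapiro(spi).pvalue)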

  7. Comparative evaluation of a new lactation curve model for pasture-based Holstein-Friesian dairy cows.

    PubMed

    Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O

    2012-09-01

    Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting the average lactation, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by sampled data, and was more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation.

  8. Nonparametric tests for equality of psychometric functions.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2017-12-07

    Many empirical studies measure psychometric functions (curves describing how observers' performance varies with stimulus magnitude) because these functions capture the effects of experimental conditions. To assess these effects, parametric curves are often fitted to the data and comparisons are carried out by testing for equality of mean parameter estimates across conditions. This approach is parametric and, thus, vulnerable to violations of the implied assumptions. Furthermore, testing for equality of means of parameters may be misleading: Psychometric functions may vary meaningfully across conditions on an observer-by-observer basis with no effect on the mean values of the estimated parameters. Alternative approaches to assess equality of psychometric functions per se are thus needed. This paper compares three nonparametric tests that are applicable in all situations of interest: The existing generalized Mantel-Haenszel test, a generalization of the Berry-Mielke test that was developed here, and a split variant of the generalized Mantel-Haenszel test also developed here. Their statistical properties (accuracy and power) are studied via simulation and the results show that all tests are indistinguishable as to accuracy but they differ non-uniformly as to power. Empirical use of the tests is illustrated via analyses of published data sets and practical recommendations are given. The computer code in MATLAB and R to conduct these tests is available as Electronic Supplemental Material.

  9. Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology.

    PubMed

    Bierman, Dick J; Spottiswoode, James P; Bijl, Aron

    2016-01-01

    We describe a method of quantifying the effect of Questionable Research Practices (QRPs) on the results of meta-analyses. As an example, we simulated a meta-analysis of a controversial telepathy protocol to assess the extent to which these experimental results could be explained by QRPs. Our simulations used the same numbers of studies and trials as the original meta-analysis, and the frequencies with which various QRPs were applied in the simulated experiments were based on surveys of experimental psychologists. Results of both the meta-analysis and the simulations were characterized by 4 metrics: two describing the trial and mean experiment hit rates (HR) of around 31%, where 25% is expected by chance; one the correlation between sample size and hit rate; and one the complete P-value distribution of the database. A genetic algorithm optimized the parameters describing the QRPs, and the fitness of the simulated meta-analysis was defined as the sum of the squares of Z-scores for the 4 metrics. Assuming no anomalous effect, a good fit to the empirical meta-analysis was found only by using QRPs with unrealistic parameter values. Restricting the parameter space to ranges observed in studies of QRP occurrence, under the untested assumption that parapsychologists use comparable QRPs, the fit to the published Ganzfeld meta-analysis with no anomalous effect was poor. We allowed for a real anomalous effect, be it unidentified QRPs or a paranormal effect, where the HR ranged from 25% (chance) to 31%. With an anomalous HR of 27%, the fitness became F = 1.8 (p = 0.47, where F = 0 is a perfect fit). We conclude that the very significant probability cited by the Ganzfeld meta-analysis is likely inflated by QRPs, though results remain significant (p = 0.003) with QRPs. Our study demonstrates that quantitative simulations of QRPs can assess their impact. Since meta-analyses in general might be polluted by QRPs, this method has wide applicability outside the domain of experimental parapsychology.
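
    A single QRP is easy to simulate; the sketch below applies optional stopping to a chance-level (25%) forced-choice task and shows the inflated mean hit rate. The study sizes, peeking rule and stopping threshold are invented for illustration, not taken from the meta-analysis:

        # Optional stopping inflates the mean experiment hit rate under the null.
        import numpy as np

        rng = np.random.default_rng(6)

        def run_study(n_planned=40, optional_stopping=False, peek_from=20):
            hits = 0
            for trial in range(1, n_planned + 1):
                hits += rng.random() < 0.25  # true chance performance
                if optional_stopping and trial >= peek_from and hits / trial > 0.31:
                    return hits / trial      # stop early on a "good" result
            return hits / n_planned

        honest = [run_study() for _ in range(2000)]
        qrp = [run_study(optional_stopping=True) for _ in range(2000)]
        print("mean HR, honest:            %.3f" % np.mean(honest))  # ~0.25
        print("mean HR, optional stopping: %.3f" % np.mean(qrp))     # inflated above 0.25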

  10. The relative effectiveness of empirical and physical models for simulating the dense undercurrent of pyroclastic flows under different emplacement conditions

    USGS Publications Warehouse

    Ogburn, Sarah E.; Calder, Eliza S

    2017-01-01

    High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. 
For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.

  11. Determination of band structure parameters and the quasi-particle gap of CdSe quantum dots by cyclic voltammetry.

    PubMed

    Inamdar, Shaukatali N; Ingole, Pravin P; Haram, Santosh K

    2008-12-01

    Band structure parameters such as the conduction band edge, the valence band edge and the quasi-particle gap of diffusing CdSe quantum dots (Q-dots) of various sizes were determined using cyclic voltammetry. These parameters are strongly dependent on the size of the Q-dots. The results obtained from voltammetric measurements are compared to spectroscopic and theoretical data. The fit to reported calculations based on the semi-empirical pseudopotential method (SEPM), especially in the strong size-confinement region, is the best reported so far, to our knowledge. For the smallest CdSe Q-dots, the difference between the quasi-particle gap and the optical band gap gives the electron-hole Coulombic interaction energy (J(e1,h1)). Interband states seen in the photoluminescence spectra were verified with cyclic voltammetry measurements.

  12. Onset of the convection in a supercritical fluid.

    PubMed

    Meyer, H

    2006-01-01

    A model is proposed that leads to the scaled relation t_p/τ_D = F_tp(Ra − Ra_c) for the development of convection in a pure fluid in a Rayleigh-Bénard cell after the start of the heat current at t = 0. Here t_p is the time of the first maximum of the temperature drop ΔT(t) across the fluid layer, the signature of rapidly growing convection, τ_D is the diffusion relaxation time, and Ra_c is the critical Rayleigh number. Such a relation was first obtained empirically from experimental data. Because of the unknown perturbations in the cell that lead to convection development beyond the point of the fluid instability, the model determines t_p/τ_D to within a multiplicative factor Ψ√(Ra_c^HBL), the only fitted parameter product. Here Ra_c^HBL, of order 10^3, is the critical Rayleigh number of the hot boundary layer and Ψ is a fit parameter. There is then good agreement over more than four decades of Ra − Ra_c between the model and the experiments on supercritical ³He at various heat currents and temperatures. The value of the parameter Ψ, which phenomenologically represents the effectiveness of the perturbations, is discussed in connection with predictions by El Khouri and Carlès of the fluid instability onset time.

  13. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes. PMID:27052568

  14. Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models

    PubMed Central

    Snijders, Tom A.B.; Steglich, Christian E.G.

    2014-01-01

    Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based model. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic, detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578

  15. Associations of Physical Fitness and Academic Performance among Schoolchildren

    ERIC Educational Resources Information Center

    Van Dusen, Duncan P.; Kelder, Steven H.; Kohl, Harold W., III; Ranjit, Nalini; Perry, Cheryl L.

    2011-01-01

    Background: Public schools provide opportunities for physical activity and fitness surveillance, but are evaluated and funded based on students' academic performance, not their physical fitness. Empirical research evaluating the connections between fitness and academic performance is needed to justify curriculum allocations to physical activity…

  16. Sensitivity Analysis of Empirical Parameters in the Ionosphere-Plasmasphere Model

    DTIC Science & Technology

    2011-03-01

    IFM output grid (Gardner, 2010). The longitudes have to be fit as well because the IPM flux tubes form a type of 'S' … [garbled figure caption; recoverable fragment: "… and 20 day run, and the right plot is the difference. After day five, the model output has reached stea… locations…"]

  17. The Shock and Vibration Bulletin. Part 1. Opening Session, Panel Session, Shock Analysis Shock Testing, Isolation and Damping.

    DTIC Science & Technology

    1977-09-01

    [Figure residue: blast-pressure prediction envelope for an 11.2 deg cone; axis ticks and labels unrecoverable.] … entire range of measurements. The Hugoniot data given in the cited references are too voluminous to reproduce in detail here, but selected … "…corps et spécialement dans les gaz parfaits," J. de l'École Polytech. Paris, Vol. 57, 1887, and Vol. 58, 1889 … empirical fits … parameters …

  18. Gradient Plasticity Model and its Implementation into MARMOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Erin I.; Li, Dongsheng; Zbib, Hussein M.

    2013-08-01

    The influence of strain gradient on the deformation behavior of nuclear structural materials, such as body-centered cubic (bcc) iron alloys, has been investigated. We have developed and implemented a dislocation-based strain gradient crystal plasticity material model. A mesoscale crystal plasticity model for inelastic deformation of metallic material, bcc steel, has been developed and implemented numerically. Continuum Dislocation Dynamics (CDD) with a novel constitutive law based on dislocation density evolution mechanisms was developed to investigate the deformation behaviors of single crystals, as well as polycrystalline materials, by coupling CDD and crystal plasticity (CP). The dislocation density evolution law in this model is mechanism-based, with parameters measured from experiments or simulated with lower-length-scale models, not an empirical law with parameters back-fitted from the flow curves.

  19. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited by the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show their dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
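    The Monte Carlo approximation of critical values can be sketched briefly. The example below uses the Anderson-Darling statistic with a maximum-likelihood-fitted normal distribution as a stand-in for the GNO/L-moments setup of the study (which is not reproduced here); the key point is that parameters are re-estimated from every simulated sample, exactly as in the real test.

```python
import numpy as np
from scipy import stats

def anderson_darling(x, cdf):
    """Anderson-Darling statistic for a fully specified CDF."""
    x = np.sort(x)
    n = len(x)
    u = np.clip(cdf(x), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def mc_critical_value(n, n_sim=5000, alpha=0.10, seed=1):
    """Critical value when parameters are re-estimated from each sample."""
    rng = np.random.default_rng(seed)
    stats_sim = np.empty(n_sim)
    for k in range(n_sim):
        sample = rng.normal(size=n)
        mu, sigma = sample.mean(), sample.std(ddof=1)   # re-fit each sample
        stats_sim[k] = anderson_darling(sample, lambda x: stats.norm.cdf(x, mu, sigma))
    return np.quantile(stats_sim, 1 - alpha)

print("approximate 10% critical value, n=50:", mc_critical_value(n=50))
```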

  20. Waiting time distribution in public health care: empirics and theory.

    PubMed

    Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S

    2015-12-01

    Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997 to 2005. We observe important differences in the 'scale' and the 'shape' of admission rates. Scale refers to how quickly patients are treated and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short and long duration patients differently. JEL Classification: I11; I18; H51.

  1. Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.

    PubMed

    Gamsjäger, Ernst; Wiessner, Manfred

    2018-01-01

    Thermodynamic data of various crystalline solids are assessed from low temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients keep their meaning with respect to physics. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description of data at ultra-low temperatures (0-20 K) is required.
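    A minimal sketch of such a fit, assuming a single Debye term plus a single Einstein term with adjustable weights (the paper's exact parameterization may differ); the "measured" heat capacities below are synthetic stand-ins for calorimetric data.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

R = 8.314462618  # J mol^-1 K^-1

def c_debye(T, theta_d):
    """Debye heat capacity contribution (per mole of atoms)."""
    def f(t):
        x = theta_d / t
        integral, _ = quad(lambda y: y**4 * np.exp(y) / np.expm1(y)**2, 0.0, x)
        return 9 * R * (t / theta_d)**3 * integral
    return np.vectorize(f)(T)

def c_einstein(T, theta_e):
    """Einstein heat capacity contribution (per mole of atoms)."""
    x = theta_e / np.asarray(T, float)
    return 3 * R * x**2 * np.exp(x) / np.expm1(x)**2

def c_model(T, a, theta_d, b, theta_e):
    return a * c_debye(T, theta_d) + b * c_einstein(T, theta_e)

# synthetic "measured" points standing in for calorimetric data
T = np.linspace(5.0, 300.0, 60)
cp_obs = c_model(T, 1.2, 250.0, 1.8, 600.0) \
         + np.random.default_rng(0).normal(0.0, 0.05, T.size)

popt, _ = curve_fit(c_model, T, cp_obs, p0=[1.0, 200.0, 2.0, 500.0])
print(dict(zip(["a", "theta_D", "b", "theta_E"], np.round(popt, 2))))
```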

  2. A General Model for Estimating Macroevolutionary Landscapes.

    PubMed

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
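    As a sketch of the mathematics behind such landscapes (not the authors' R implementation): under the standard convention dX = −V′(X) dt + σ dW on a bounded trait interval with reflecting bounds, the stationary density is p(x) ∝ exp(−2V(x)/σ²), so a two-peaked macroevolutionary landscape is easy to tabulate on a grid.

```python
import numpy as np
from scipy.signal import argrelmax

def stationary_density(V, x, sigma):
    """Stationary Fokker-Planck density for dX = -V'(X) dt + sigma dW with
    reflecting bounds: p(x) proportional to exp(-2 V(x) / sigma**2)."""
    w = np.exp(-2.0 * V(x) / sigma**2)
    dx = x[1] - x[0]
    return w / (w.sum() * dx)   # normalize on the grid

# a two-peaked landscape (disruptive selection between two trait optima)
x = np.linspace(-3.0, 3.0, 601)
V = lambda z: z**4 - 2.0 * z**2
p = stationary_density(V, x, sigma=1.0)
print("density peaks near:", x[argrelmax(p)[0]])   # expected near -1 and +1
```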

  3. Power laws in citation distributions: evidence from Scopus.

    PubMed

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings suggest also that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1 % for most of science fields) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
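    The same style of comparison can be run with the powerlaw Python package (Alstott et al.), which implements Clauset-style fitting and likelihood-ratio tests; whether the author used this package is not stated, and the citation counts below are synthetic stand-ins for the Scopus data.

```python
import numpy as np
import powerlaw  # pip install powerlaw

# stand-in for per-paper citation counts of one field of science
rng = np.random.default_rng(42)
citations = rng.zipf(a=3.5, size=20_000)

fit = powerlaw.Fit(citations, discrete=True)  # xmin chosen by KS minimization
print("alpha =", fit.power_law.alpha, "xmin =", fit.power_law.xmin)

# normalized log-likelihood ratio: R > 0 favours the power law;
# p indicates whether the sign of R is statistically meaningful
for alt in ["lognormal", "truncated_power_law", "exponential"]:
    R, p = fit.distribution_compare("power_law", alt, normalized_ratio=True)
    print(f"power_law vs {alt}: R = {R:+.2f}, p = {p:.3f}")
```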

  4. SIMPLE estimate of the free energy change due to aliphatic mutations: superior predictions based on first principles.

    PubMed

    Bueno, Marta; Camacho, Carlos J; Sancho, Javier

    2007-09-01

    The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5-to-40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded state of aliphatic point mutations predicts 20-30% less false positives and yields more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first principle analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatics free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open.

  5. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations, or Celestial Pole Offsets (CPO), can only be obtained by the VLBI technique. An accuracy of the order of 0.1 milliarcseconds (mas) allows the observed nutation to be compared with theoretical predictions for a rigid Earth and constrains geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal terms of nutation, in an attempt to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.

  6. An Empirical Study on Raman Peak Fitting and Its Application to Raman Quantitative Research.

    PubMed

    Yuan, Xueyin; Mayanovic, Robert A

    2017-10-01

    Fitting experimentally measured Raman bands with theoretical model profiles is the basic operation for numerical determination of Raman peak parameters. In order to investigate the effects of peak modeling using various algorithms on peak fitting results, the representative Raman bands of mineral crystals, glass, fluids as well as the emission lines from a fluorescent lamp, some of which were measured under ambient light whereas others under elevated pressure and temperature conditions, were fitted using Gaussian, Lorentzian, Gaussian-Lorentzian, Voigtian, Pearson type IV, and beta profiles. From the fitting results of the Raman bands investigated in this study, the fitted peak position, intensity, area and full width at half-maximum (FWHM) values of the measured Raman bands can vary significantly depending upon which peak profile function is used in the fitting, and the most appropriate fitting profile should be selected depending upon the nature of the Raman bands. Specifically, the symmetric Raman bands of mineral crystals and non-aqueous fluids are best fit using Gaussian-Lorentzian or Voigtian profiles, whereas the asymmetric Raman bands are best fit using Pearson type IV profiles. The asymmetric O-H stretching vibrations of H2O and the Raman bands of soda-lime glass are best fit using several Gaussian profiles, whereas the emission lines from a fluorescent light are best fit using beta profiles. Multiple peaks that are not clearly separated can be fit simultaneously, provided the residuals from fitting one peak do not significantly affect the fitting of the remaining peaks. Once the resolution of the Raman spectrometer has been properly accounted for, our findings show that the precision in peak position and intensity can be improved significantly by fitting the measured Raman peaks with appropriate profiles. Nevertheless, significant errors in peak position and intensity were still observed in the results from fitting of weak and wide Raman bands having unnormalized intensity/FWHM ratios lower than 200 counts/cm-1.
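    As an illustration of profile fitting, here is a minimal pseudo-Voigt fit with SciPy (the pseudo-Voigt is a common approximation to the Voigtian; the synthetic band below stands in for a measured spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, cen, fwhm, eta):
    """Pseudo-Voigt: eta-weighted sum of Lorentzian and Gaussian of equal FWHM."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-0.5 * ((x - cen) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - cen) / (fwhm / 2)) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

# synthetic band standing in for a measured Raman peak (counts vs cm^-1)
x = np.linspace(950, 1050, 400)
y = pseudo_voigt(x, 800, 1001.3, 12.0, 0.4) \
    + np.random.default_rng(7).normal(0, 5, x.size)

popt, pcov = curve_fit(pseudo_voigt, x, y, p0=[700, 1000, 10, 0.5])
perr = np.sqrt(np.diag(pcov))
for name, v, e in zip(["amp", "center", "FWHM", "eta"], popt, perr):
    print(f"{name:6s} = {v:9.3f} +/- {e:.3f}")
```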

  7. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools to provide estimations concerning risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales; they do not account for many driving factors that are within the scope of scenario-related analysis. In addition, the physically based models contain important empirical parts and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. In particular, soil infiltrability and the resistance (also called "critical shear stress" or "critical stream power") are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution); consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only some minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, the aggregate detachment and the aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate, which deliver information on the spatial and temporal structure of soil and surface properties and processes.

  8. Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory.

    PubMed

    Verzi, Stephen J; Heileman, Gregory L; Georgiopoulos, Michael

    2006-05-01

    In this paper, several modifications to the Fuzzy ARTMAP neural network architecture are proposed for conducting classification in complex, possibly noisy, environments. The goal of these modifications is to improve upon the generalization performance of Fuzzy ART-based neural networks, such as Fuzzy ARTMAP, in these situations. One of the major difficulties of employing Fuzzy ARTMAP on such learning problems involves over-fitting of the training data. Structural risk minimization is a machine-learning framework that addresses the issue of over-fitting by providing a backbone for analysis as well as an impetus for the design of better learning algorithms. The theory of structural risk minimization reveals a trade-off between training error and classifier complexity in reducing generalization error, which will be exploited in the learning algorithms proposed in this paper. Boosted ART extends Fuzzy ART by allowing the spatial extent of each cluster formed to be adjusted independently. Boosted ARTMAP generalizes upon Fuzzy ARTMAP by allowing non-zero training error in an effort to reduce the hypothesis complexity and hence improve overall generalization performance. Although Boosted ARTMAP is strictly speaking not a boosting algorithm, the changes it encompasses were motivated by the goals that one strives to achieve when employing boosting. Boosted ARTMAP is an on-line learner, it does not require excessive parameter tuning to operate, and it reduces precisely to Fuzzy ARTMAP for particular parameter values. Another architecture described in this paper is Structural Boosted ARTMAP, which uses both Boosted ART and Boosted ARTMAP to perform structural risk minimization learning. Structural Boosted ARTMAP will allow comparison of the capabilities of off-line versus on-line learning as well as empirical risk minimization versus structural risk minimization using Fuzzy ARTMAP-based neural network architectures. Both empirical and theoretical results are presented to enhance the understanding of these architectures.

  9. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
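    The abstract does not give the functional form of τ, so the sketch below is a hypothetical stand-in that combines a relative-magnitude term and an angular term, as described; the weights and the name `cost_tau` are invented for illustration.

```python
import numpy as np

def cost_tau(b_model, b_obs, w_mag=1.0, w_ang=1.0):
    """Hypothetical stand-in for the paper's cost function: a weighted sum of a
    relative field-magnitude error and the angle between model and observed B.
    b_model, b_obs: arrays of shape (N, 3) of field vectors (nT)."""
    norm_m = np.linalg.norm(b_model, axis=1)
    norm_o = np.linalg.norm(b_obs, axis=1)
    mag = np.abs(norm_m - norm_o) / norm_o
    cosang = np.sum(b_model * b_obs, axis=1) / (norm_m * norm_o)
    ang = np.arccos(np.clip(cosang, -1.0, 1.0))   # radians
    return np.mean(w_mag * mag + w_ang * ang)

# toy usage: model vectors perturbed around "observed" ones
rng = np.random.default_rng(0)
b_obs = rng.normal(size=(100, 3))
b_model = b_obs + rng.normal(scale=0.1, size=(100, 3))
print("tau =", cost_tau(b_model, b_obs))
```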

  11. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
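    A hedged sketch of the model structure described: a Gaussian in latitude whose amplitude, centroid latitude, and width are truncated Fourier series in local time. The coefficients below are invented placeholders; in the study each coefficient is constrained by the published ion maps and varies linearly with Kp.

```python
import numpy as np

def fourier_eval(coeffs, mlt, n_terms):
    """Evaluate a truncated Fourier series in magnetic local time (period 24 h)."""
    phi = 2 * np.pi * np.asarray(mlt, float) / 24.0
    out = np.full_like(phi, coeffs[0], dtype=float)
    for n in range(1, n_terms + 1):
        out += coeffs[2 * n - 1] * np.cos(n * phi) + coeffs[2 * n] * np.sin(n * phi)
    return out

def ion_flux(lat, mlt, c_amp, c_lat0, c_width, n_terms=2):
    """Gaussian-in-latitude oval; amplitude, centroid and width vary with MLT."""
    A = fourier_eval(c_amp, mlt, n_terms)
    lat0 = fourier_eval(c_lat0, mlt, n_terms)
    w = fourier_eval(c_width, mlt, n_terms)
    return A * np.exp(-0.5 * ((np.asarray(lat, float) - lat0) / w) ** 2)

# placeholder coefficients (constant term + 2 harmonics each)
c_amp  = [1.0, 0.3, 0.0, 0.1, 0.0]
c_lat0 = [67.0, -2.0, 0.0, 0.5, 0.0]
c_wid  = [3.0, 0.5, 0.0, 0.0, 0.0]
print(ion_flux(lat=66.0, mlt=22.0, c_amp=c_amp, c_lat0=c_lat0, c_width=c_wid))
```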

  12. Study of the influence of Type Ia supernovae environment on the Hubble diagram

    NASA Astrophysics Data System (ADS)

    Henne, Vincent

    2016-06-01

    The observational cosmology with distant Type Ia supernovae as standard candles claims that the Universe is in accelerated expansion, caused by a large fraction of dark energy. In this report we investigated the SNe Ia environment, studying the impact of the nature of their host galaxies and their distance to the host galactic center on the Hubble diagram fitting. The supernovae used in the analysis were extracted from the Joint-Light-curves-Analysis compilation of high-redshift and nearby supernovae. The analysis is based on the empirical fact that SN Ia luminosities depend on their light curve shapes and colors. No conclusive correlation between SN Ia light curve parameters and galactocentric distance was identified. Concerning the host morphology, we showed that the stretch parameter of Type Ia supernovae is correlated with the host galaxy type. The supernovae with lower stretch mainly exploded in elliptical and lenticular galaxies. The studies show that in old stellar populations and low-dust environments, supernovae are fainter. We did not find any significant correlation between Type Ia supernova color and host morphology. We confirm that supernova properties depend on their environment and propose to incorporate a host galaxy term into the Hubble diagram fit in future cosmological analyses.

  13. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only overcomes the influence of strong peaks but also solves the problem of low correction accuracy when the peak number is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by strong peaks, peak number, and wavenumber. PMID:26037638
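    Goldindec's cost function is not reproduced here, but the iterative polynomial-fitting family it improves upon can be sketched compactly (a minimal "modified polyfit" loop on a synthetic spectrum):

```python
import numpy as np

def iterative_polynomial_baseline(wavenumber, intensity, degree=5, n_iter=100):
    """Modified-polyfit baseline: refit while clipping points above the
    current fit, so peaks are progressively excluded from the baseline."""
    y = np.asarray(intensity, float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(wavenumber, y, degree)
        base = np.polyval(coeffs, wavenumber)
        y = np.minimum(y, base)   # keep whichever is lower: data or fit
    return base

# synthetic spectrum: fluorescence background + two Raman peaks + noise
x = np.linspace(400, 1800, 1400)
background = 1e-4 * (x - 400) ** 2 + 50
peaks = (300 * np.exp(-0.5 * ((x - 1000) / 8) ** 2)
         + 200 * np.exp(-0.5 * ((x - 1450) / 10) ** 2))
spectrum = background + peaks + np.random.default_rng(3).normal(0, 3, x.size)

corrected = spectrum - iterative_polynomial_baseline(x, spectrum)
print("max corrected intensity:", corrected.max())
```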

  14. An empirical analysis of the Ebola outbreak in West Africa

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2017-02-01

    The data for the Ebola outbreak that occurred in 2014-2016 in three countries of West Africa are analysed within a common framework. The analysis is made using the results of an agent-based Susceptible-Infected-Removed (SIR) model on a Euclidean network, where nodes at a distance l are connected with probability P(l) ∝ l^(−δ), with δ determining the range of the interaction, in addition to nearest neighbors. The cumulative (total) density of infected population here takes a functional form whose parameters depend on δ and the infection probability q; this form is seen to fit the data well. Using the best fitting parameters, the time at which the peak is reached is estimated and is shown to be consistent with the data. We also show that in the Euclidean model, one can choose δ and q values which reproduce the data for the three countries qualitatively. These choices are correlated with population density, control schemes and other factors. Comparing the real data and the results from the model, one can also estimate the size of the actual population susceptible to the disease. Rescaling the real data, a reasonably good quantitative agreement with the simulation results is obtained.
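    A minimal sketch of the agent-based SIR dynamics described, on a 1-D ring with long-range links drawn with probability P(l) ∝ l^(−δ); all parameter values are illustrative, not fitted to the outbreak data.

```python
import numpy as np

def simulate_sir_euclidean(n=2000, delta=2.0, extra_links=2, q=0.3,
                           t_max=200, seed=0):
    """Agent-based SIR on a 1-D Euclidean network: nearest neighbours plus
    long-range links chosen with probability P(l) proportional to l**(-delta)."""
    rng = np.random.default_rng(seed)
    lengths = np.arange(1, n // 2)
    p = lengths.astype(float) ** -delta
    p /= p.sum()
    neighbours = [set() for _ in range(n)]
    for i in range(n):
        neighbours[i].update([(i - 1) % n, (i + 1) % n])
        for l in rng.choice(lengths, size=extra_links, p=p):
            j = (i + l) % n
            neighbours[i].add(j)
            neighbours[j].add(i)
    state = np.zeros(n, dtype=int)          # 0 = S, 1 = I, 2 = R
    state[rng.integers(n)] = 1
    cumulative = []
    for _ in range(t_max):
        infected = np.flatnonzero(state == 1)
        for i in infected:
            for j in neighbours[i]:
                if state[j] == 0 and rng.random() < q:
                    state[j] = 1
        state[infected] = 2                 # removed after one infectious step
        cumulative.append(np.mean(state > 0))
    return np.array(cumulative)

R = simulate_sir_euclidean()
print("final cumulative infected density:", R[-1])
```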

  15. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of the Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set the parameters. The two-component model provided the best fit to the Wallops Island data but the Moupfuma model provided the best fit to the Darwin data.

  16. Semi-empirical and empirical L X-ray production cross sections for elements with 50 ⩽ Z ⩽ 92 for protons of 0.5-3.0 MeV

    NASA Astrophysics Data System (ADS)

    Nekab, M.; Kahoul, A.

    2006-04-01

    We present in this contribution semi-empirical production cross sections of the main X-ray lines Lα, Lβ and Lγ for elements from Sn to U and for protons with energies varying from 0.5 to 3.0 MeV. The theoretical X-ray production cross sections are first calculated from the theoretical ionization cross sections of the L_i (i = 1, 2, 3) subshells within the ECPSSR theory. The semi-empirical Lα, Lβ and Lγ cross sections are then deduced by fitting the available experimental data normalized to their corresponding theoretical values, and give a better representation of the experimental data in some cases. On the other hand, the experimental data are directly fitted to deduce the empirical L X-ray production cross sections. A comparison is made between the semi-empirical cross sections, the empirical cross sections reported in this work, the empirical ones reported by Reis and Jesus [M.A. Reis, A.P. Jesus, Atom. Data Nucl. Data Tables 63 (1996) 1], and those of Strivay and Weber [D. Strivay, G. Weber, Nucl. Instr. and Meth. B 190 (2002) 112].

  17. Empirically derived dimensional syndromes of self-reported psychopathology: Cross-cultural comparisons of Portuguese and US elders.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas; Leite, Manuela; Almeida, Vera; Caldas, Carlos; Turner, Lori; Dumas, Julie A

    2018-05-01

    As the world population ages, mental health professionals increasingly need empirically supported assessment instruments for older adult psychopathology. This study tested the degree to which syndromes derived from self-ratings of psychopathology by elders in the US would fit self-ratings by elders in Portugal. The Older Adult Self-Report (OASR) was completed by 352 60- to 102-year-olds in Portuguese community and residential settings. Confirmatory factor analyses tested the fit of the 7-syndrome OASR model to self-ratings by Portuguese elders. The primary fit index (Root Mean Square Error of Approximation) showed good fit, while secondary fit indices (the Comparative Fit Index and the Tucker-Lewis Index) showed acceptable fit. Loadings of 95 of the 97 items on their expected syndromes were statistically significant (mean = .63), indicating that the items measured the syndromes well. Correlations between latent factors, i.e., between the hypothesized syndrome constructs measured by the items, averaged .66. The correlations between syndromes reflect varying degrees of comorbidity between problems comprising particular pairs of syndromes. The results support the syndrome structure of the OASR for Portuguese elders, offering Portuguese clinicians and researchers a useful instrument for assessing a broad spectrum of psychopathology. The results also offer a core of empirically supported taxonomic constructs of later life psychopathology as a basis for advancing clinical practice, training, and cross-cultural research.

  18. Quantifying crustal thickness over time in magmatic arcs

    NASA Astrophysics Data System (ADS)

    Profeta, Lucia; Ducea, Mihai N.; Chapman, James B.; Paterson, Scott R.; Gonzales, Susana Marisol Henriquez; Kirsch, Moritz; Petrescu, Lucian; Decelles, Peter G.

    2015-12-01

    We present global and regional correlations between whole-rock values of Sr/Y and La/Yb and crustal thickness for intermediate rocks from modern subduction-related magmatic arcs formed around the Pacific. These correlations bolster earlier ideas that various geochemical parameters can be used to track changes of crustal thickness through time in ancient subduction systems. Inferred crustal thicknesses using our proposed empirical fits are consistent with independent geologic constraints for the Cenozoic evolution of the central Andes, as well as various Mesozoic magmatic arc segments currently exposed in the Coast Mountains, British Columbia, and the Sierra Nevada and Mojave-Transverse Range regions of California. We propose that these geochemical parameters can be used, when averaged over the typical lifetimes and spatial footprints of composite volcanoes and their intrusive equivalents to infer crustal thickness changes over time in ancient orogens.

  20. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    NASA Astrophysics Data System (ADS)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) to study turbine-grid interaction. First, the Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced in detail. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), energy ratios and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is implemented to extract characteristic parameters of different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of turbine-grid interaction of a grid-connected wind turbine.
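    The EMD step can be sketched with the PyEMD package (assuming its EMD class; the signal below is a synthetic stand-in for measured output power), including the per-IMF energy ratio used to screen unessential components.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# stand-in for sampled active-power output of a grid-connected turbine
t = np.linspace(0, 10, 2000)
power = (1.5 + 0.3 * np.sin(2 * np.pi * 0.2 * t)      # slow wind variation
         + 0.05 * np.sin(2 * np.pi * 15 * t)           # oscillatory interaction mode
         + 0.02 * np.random.default_rng(1).standard_normal(t.size))

imfs = EMD().emd(power, t)   # rows are IMFs (plus residual)

# energy ratio of each component, used to flag unessential ones
energy = np.sum(imfs**2, axis=1)
for k, e in enumerate(energy / energy.sum()):
    print(f"component {k + 1}: energy ratio = {e:.3f}")
```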

  1. Low resolution spectroscopic investigation of Am stars using Automated method

    NASA Astrophysics Data System (ADS)

    Sharma, Kaushal; Joshi, Santosh; Singh, Harinder P.

    2018-04-01

    The automated method of full-spectrum fitting gives reliable estimates of stellar atmospheric parameters (Teff, log g, and [Fe/H]) for late A, F, G, and early K type stars. Recently, the technique was further improved in the cooler regime and the validity range was extended to a spectral type of M6-M7 (Teff ≈ 2900 K). The present study aims to explore the application of this method to the low-resolution spectra of Am stars, a class of chemically peculiar stars, to examine its robustness for these objects. We use ULySS with the Medium-resolution INT Library of Empirical Spectra (MILES) V2 spectral interpolator for parameter determination. The determined Teff and log g values are found to be in good agreement with those obtained from high-resolution spectroscopy.

  2. Monte Carlo modeling of fluorescence in semi-infinite turbid media

    NASA Astrophysics Data System (ADS)

    Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.

    2018-02-01

    The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in vivo. In this study, we use Monte Carlo simulations to evaluate the effect of incident beam radius and optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a four-parameter empirical correction function and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1-1 cm-1, μs' = 5-40 cm-1).

  3. Thermal inactivation kinetics of Lactococcus lactis subsp. lactis bacteriophage pll98-22.

    PubMed

    Sanlibaba, Pinar; Buzrul, S; Akkoç, Nefise; Alpas, H; Akçelik, M

    2009-03-01

    Survival curves of Lactococcus lactis subsp. lactis bacteriophage pll98-22 inactivated by heat were obtained at seven temperatures (50-80 degrees C) in M17 broth and skim milk. Deviations from first-order kinetics in both media were observed as sigmoidal shapes in the survival curves. An empirical model with four parameters was used to describe the thermal inactivation. The number of parameters was reduced from four to two in order to increase the robustness of the model, and the reduced model produced fits comparable to the full model. Both the survival data and the calculations done using the reduced model (the time necessary to reduce the number of phage pll98-22 by six or seven log10 cycles) indicated that skim milk is a more protective medium than M17 broth within the assayed temperature range.

  4. Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.

    PubMed

    Yamaguchi, Kazuhiro; Okada, Kensuke

    2018-01-01

    A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. The following two major findings emerged. First, in accordance with former studies, CDMs had a better fit than did the item response theory models. Second, main effects models generally had a better fit than other parsimonious or the saturated models. Related to the second finding, the fit of the traditional parsimonious models such as the DINA and DINO models were not optimal. The empirical educational implications of these findings are discussed.

  6. Estimating non-isothermal bacterial growth in foods from isothermal experimental data.

    PubMed

    Corradini, M G; Peleg, M

    2005-01-01

    To develop a mathematical method to estimate non-isothermal microbial growth curves in foods from experiments performed under isothermal conditions, and to demonstrate the method's applicability with published growth data. Published isothermal growth curves of Pseudomonas spp. in refrigerated fish at 0-8 degrees C and Escherichia coli 1952 in a nutritional broth at 27.6-36 degrees C were fitted with two different three-parameter 'primary models', and the temperature dependence of their parameters was fitted by ad hoc empirical 'secondary models'. These were used to generate non-isothermal growth curves by solving, numerically, a differential equation derived on the premise that the momentary non-isothermal growth rate is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary growth level of the population. The predicted non-isothermal growth curves were in agreement with the reported experimental ones and, as expected, the quality of the predictions did not depend on the 'primary model' chosen for the calculation. A common type of sigmoid growth curve can be adequately described by three-parameter 'primary models'. At least in the two systems examined, these could be used to predict growth patterns under a variety of continuous and discontinuous non-isothermal temperature profiles. The described mathematical method, once validated experimentally, will enable the simulation of the microbial quality of stored and transported foods under a large variety of existing or contemplated commercial temperature histories.
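    A sketch of the method's core premise, assuming a Gompertz primary model and a hypothetical secondary model for its rate parameter: for the Gompertz curve the isothermal rate can be written directly in terms of the momentary level y, so the "time corresponding to the momentary growth level" is handled implicitly.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical secondary models for the Gompertz primary-model parameters;
# in practice each would be fitted to the isothermal experiments
A = lambda T: 8.0                   # asymptote (log10 count), taken T-independent here
k = lambda T: 0.02 * 1.07 ** T      # rate parameter rising with temperature

def rate(t, y, temp_profile):
    """Momentary non-isothermal rate = isothermal Gompertz rate at the momentary
    temperature, at the momentary level y. For Gompertz, dY/dt = k*Y*(-ln(Y/A)),
    so the inversion to the corresponding isothermal time is implicit."""
    T = temp_profile(t)
    yv = float(np.clip(y[0], 1e-9, A(T) - 1e-9))
    return [k(T) * yv * (-np.log(yv / A(T)))]

temp = lambda t: 4.0 + 4.0 * np.sin(2 * np.pi * t / 24.0)   # diurnal 0-8 C cycle (h)
sol = solve_ivp(rate, (0.0, 120.0), [0.01], args=(temp,), dense_output=True)
print("log10 count after 5 days:", sol.y[0, -1])
```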

  7. "Doing for Group Exercise What McDonald's Did for Hamburgers": Les Mills, and the Fitness Professional as Global Traveller

    ERIC Educational Resources Information Center

    Andreasson, Jesper; Johansson, Thomas

    2016-01-01

    This article analyses fitness professionals' perceptions and understanding of their occupational education and pedagogical pursuance, framed within the emergence of a global fitness industry. The empirical material consists of interviews with personal trainers and group fitness instructors, as well as observations in their working environment. In…

  8. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    DTIC Science & Technology

    2016-09-01

    is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between … a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta … One such measure is the discrete-probability-distribution version of the squared Hellinger distance (Yang & Le Cam, 2000): H²(P, Q) = 1 − Σᵢ √(pᵢ qᵢ).
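    For reference, the squared Hellinger distance cited above is straightforward to compute for discrete distributions (toy rating frequencies below):

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.sum(np.sqrt(p * q))

# toy frequencies for two sets of ordinal (e.g. Cooper-Harper binned) ratings
ratings_a = np.array([0.1, 0.3, 0.4, 0.2])
ratings_b = np.array([0.2, 0.4, 0.3, 0.1])
print(hellinger_sq(ratings_a, ratings_b))
```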

  9. Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology

    PubMed Central

    Bierman, Dick J.; Spottiswoode, James P.; Bijl, Aron

    2016-01-01

    We describe a method of quantifying the effect of Questionable Research Practices (QRPs) on the results of meta-analyses. As an example we simulated a meta-analysis of a controversial telepathy protocol to assess the extent to which these experimental results could be explained by QRPs. Our simulations used the same numbers of studies and trials as the original meta-analysis and the frequencies with which various QRPs were applied in the simulated experiments were based on surveys of experimental psychologists. Results of both the meta-analysis and simulations were characterized by 4 metrics, two describing the trial and mean experiment hit rates (HR) of around 31%, where 25% is expected by chance, one the correlation between sample-size and hit-rate, and one the complete P-value distribution of the database. A genetic algorithm optimized the parameters describing the QRPs, and the fitness of the simulated meta-analysis was defined as the sum of the squares of Z-scores for the 4 metrics. Assuming no anomalous effect a good fit to the empirical meta-analysis was found only by using QRPs with unrealistic parameter-values. Restricting the parameter space to ranges observed in studies of QRP occurrence, under the untested assumption that parapsychologists use comparable QRPs, the fit to the published Ganzfeld meta-analysis with no anomalous effect was poor. We allowed for a real anomalous effect, be it unidentified QRPs or a paranormal effect, where the HR ranged from 25% (chance) to 31%. With an anomalous HR of 27% the fitness became F = 1.8 (p = 0.47 where F = 0 is a perfect fit). We conclude that the very significant probability cited by the Ganzfeld meta-analysis is likely inflated by QRPs, though results are still significant (p = 0.003) with QRPs. Our study demonstrates that quantitative simulations of QRPs can assess their impact. Since meta-analyses in general might be polluted by QRPs, this method has wide applicability outside the domain of experimental parapsychology. PMID:27144889

  10. Numerical analysis of the effect of the kind of activating agent and the impregnation ratio on the parameters of the microporous structure of the active carbons

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Mirosław

    2015-09-01

    The paper presents the results of the research on the application of the LBET class adsorption models with the fast multivariant identification procedure as a tool for analysing the microporous structure of the active carbons obtained by chemical activation using potassium and sodium hydroxides as an activator. The proposed technique of the fast multivariant fitting of the LBET class models to the empirical adsorption data was employed particularly to evaluate the impact of the used activator and the impregnation ratio on the obtained microporous structure of the carbonaceous adsorbents.

  11. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

    Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the systems, such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.

  12. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching that generates further transients. An accurate mathematical model is therefore important, because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring, and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. Based on minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.
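
    Full vector fitting iteratively relocates a set of poles, but its workhorse is a linear least-squares solve for the residues and constant term of a pole-residue model with the poles held fixed. The sketch below shows only that linear step; the poles, the synthetic response, and all values are invented for illustration.

      import numpy as np

      # Fit f(s) ~ sum_k r_k / (s - p_k) + d for residues r_k and constant d,
      # with an assumed set of stable real poles p_k (pole relocation omitted).
      s = 2j * np.pi * np.logspace(0, 4, 200)            # frequency samples
      f = 1.0 / (s + 50.0) + 3.0 / (s + 2000.0) + 0.01   # synthetic "measured" response

      poles = np.array([-30.0, -1500.0])                 # assumed starting poles
      A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
      A_ri = np.vstack([A.real, A.imag])                 # stack real and imaginary parts
      f_ri = np.concatenate([f.real, f.imag])
      x, *_ = np.linalg.lstsq(A_ri, f_ri, rcond=None)
      print("residues:", np.round(x[:-1], 4), "constant:", round(x[-1], 4))

    In the full algorithm this solve alternates with a pole-relocation step until the poles converge.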

  13. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching that generates further transients. An accurate mathematical model is therefore important, because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring, and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. Based on minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation. PMID:29351554

  14. Thermodynamic and acoustical properties of mixtures p-anisaldehyde–alkanols (C1–C4)–2-methyl-1-propanol at 303.15 K

    NASA Astrophysics Data System (ADS)

    Saini, Balwinder; Kumar, Ashwani; Rani, Ruby; Bamezai, Rajinder K.

    2016-07-01

    The density, viscosity, and speed of sound of pure p-anisaldehyde and several alkanols (methanol, ethanol, propan-1-ol, propan-2-ol, butan-1-ol, butan-2-ol, and 2-methylpropan-1-ol), and of the binary mixtures of p-anisaldehyde with these alkanols, were measured over the entire composition range at 303.15 K. From the experimental data, thermodynamic parameters such as the excess molar volume (V^E) and the excess Gibbs free energy of activation (ΔG*E), and deviation parameters such as the viscosity deviation (Δη), speed of sound deviation (Δu), and isentropic compressibility deviation (Δκs), were calculated. The excess and deviation parameters were fitted to the Redlich–Kister equation. Additionally, the viscosity data for these systems were correlated using the empirical relations of Grunberg and Nissan, Katti and Chaudhari, and Hind et al. The results are discussed in terms of the specific interactions present in the mixtures.
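
    Fitting the Redlich–Kister expansion, V^E = x1·x2·Σk Ak·(x1 − x2)^k, is linear in the coefficients Ak, so a single least-squares solve suffices. A minimal sketch on invented data:

      import numpy as np

      # Hypothetical excess molar volume data over the composition range.
      x1 = np.linspace(0.05, 0.95, 10)
      x2 = 1.0 - x1
      v_e = -0.8 * x1 * x2 + 0.3 * x1 * x2 * (x1 - x2)   # fake V^E, cm^3/mol

      order = 3                                           # number of A_k terms
      M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order)])
      coeffs, *_ = np.linalg.lstsq(M, v_e, rcond=None)
      print("Redlich-Kister coefficients:", np.round(coeffs, 4))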

  15. Enhancement of oxygen mass transfer and gas holdup using palm oil in stirred tank bioreactors with xanthan solutions as simulated viscous fermentation broths.

    PubMed

    Mohd Sauid, Suhaila; Krishnan, Jagannathan; Huey Ling, Tan; Veluri, Murthy V P S

    2013-01-01

    Volumetric mass transfer coefficient (kLa) is an important parameter in bioreactors handling viscous fermentations such as xanthan gum production, as it affects reactor performance and productivity. Published literature shows that adding an organic phase such as hydrocarbons or vegetable oil can increase the kLa. The present study opted for palm oil as the organic phase, as it is plentiful in Malaysia. Experiments were carried out to study the viscosity, gas holdup, and kLa of xanthan solutions with different palm oil fractions by varying the agitation and aeration rates in a 5 L bench-top bioreactor fitted with twin Rushton turbines. Results showed that 10% (v/v) palm oil raised the kLa of the xanthan solution 1.5- to 3-fold, with the highest kLa value being 84.44 h-1. Palm oil was also found to increase the gas holdup and viscosity of the xanthan solution. The kLa values, obtained as a function of power input, superficial gas velocity, and palm oil fraction, were validated against two different empirical equations; similarly, the gas holdup, obtained as a function of power input and superficial gas velocity, was validated against another empirical equation. All correlations fit the data well, with high coefficients of determination.
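
    Empirical kLa correlations of the kind used for validation are commonly of the power-law form kLa = a·(P/V)^b·(vs)^c, which becomes linear after taking logarithms. A sketch on fabricated data (the exponents and ranges are illustrative, not the study's values):

      import numpy as np

      rng = np.random.default_rng(1)
      PV = rng.uniform(100, 2000, 20)      # specific power input, W/m^3
      vs = rng.uniform(0.002, 0.01, 20)    # superficial gas velocity, m/s
      kla = 0.02 * PV ** 0.6 * vs ** 0.4 * rng.lognormal(0.0, 0.05, 20)  # fake data

      # Linear regression in log space: ln kLa = ln a + b ln(P/V) + c ln(vs).
      X = np.column_stack([np.ones(20), np.log(PV), np.log(vs)])
      beta, *_ = np.linalg.lstsq(X, np.log(kla), rcond=None)
      a, b, c = np.exp(beta[0]), beta[1], beta[2]
      print(f"kLa = {a:.3g} * (P/V)^{b:.2f} * vs^{c:.2f}")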

  16. An equation for pressure of a two-dimensional Yukawa liquid

    NASA Astrophysics Data System (ADS)

    Feng, Yan; Li, Wei; Wang, Qiaoling; Lin, Wei; Goree, John; Liu, Bin

    2016-10-01

    The thermodynamic behavior of two-dimensional (2D) dusty plasmas has recently been studied experimentally and theoretically. As a crucial thermodynamic parameter, the pressure of a dusty plasma arises from frequent collisions of individual dust particles. Here, equilibrium molecular dynamics simulations were performed to study the pressure of 2D Yukawa liquids. A simple analytical expression for the pressure of a 2D Yukawa liquid is found by fitting the obtained pressure data over a wide range of temperatures, from close to the melting point up to about 70 times the melting point. The obtained expression verifies that the pressure can be written as the sum of a potential term, which is a simple multiple of the Coulomb potential energy at a distance of the Wigner-Seitz radius, and a kinetic term, which is a multiple of the ideal-gas pressure. Dimensionless coefficients for each of these terms are found empirically, by fitting. The resulting analytical expression, with its empirically determined coefficients, is plotted as isochors, or curves of constant area. These results should be applicable to 2D dusty plasmas. Work in China supported by the National Natural Science Foundation of China under Grant No. 11505124, the 1000 Youth Talents Plan, and startup funds from Soochow University. Work in the US supported by DOE & NSF.
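
    Because the two dimensionless coefficients enter linearly, they can be recovered with one least-squares solve once the potential and kinetic terms are tabulated. The sketch below uses normalized units and fabricated "simulation" pressures:

      import numpy as np

      # p = A * n * E_c + B * n * kB*T, with E_c the Coulomb energy at the
      # Wigner-Seitz radius; n, E_c, and the data are illustrative stand-ins.
      rng = np.random.default_rng(2)
      n, e_c = 1.0, 1.0                          # normalized areal density and energy
      kT = np.linspace(0.05, 3.5, 30)            # normalized temperatures
      p_sim = 0.39 * n * e_c + 0.95 * n * kT + 0.01 * rng.normal(size=kT.size)

      X = np.column_stack([np.full(kT.size, n * e_c), n * kT])
      (A, B), *_ = np.linalg.lstsq(X, p_sim, rcond=None)
      print(f"A = {A:.3f}, B = {B:.3f}")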

  17. The V-band Empirical Mass-luminosity Relation for Main Sequence Stars

    NASA Astrophysics Data System (ADS)

    Xia, Fang; Fu, Yan-Ning

    2010-07-01

    Stellar mass is an indispensable parameter in studies of stellar physics and stellar dynamics. On the one hand, the most reliable way to determine a stellar dynamical mass is via orbital determination of binaries. On the other hand, most stellar masses have to be estimated using the mass-luminosity relation (MLR). It is therefore important to obtain the empirical MLR by fitting data on stellar dynamical masses and luminosities. The effect of metallicity can make this relation disperse in the V band, but studies show that this is mainly limited to stellar masses below 0.6 M⊙. Recently, many relevant data have been accumulated for main sequence stars of larger mass, making it possible to significantly improve the corresponding MLR. Using a fitting method that can reasonably assign weights to observational data comprising two quantities with different dimensions, we obtain a V-band MLR based on the dynamical masses and luminosities of 203 main sequence stars. Compared with previous work, the improvement in the MLR is statistically significant, and the relative error of mass estimation reaches about 5%. Our MLR is therefore useful not only in studies of a statistical nature, but also in studies of concrete stellar systems, such as the long-term dynamical study and the short-term positioning study of a specific multiple star system.
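
    A fitting method that weights observational errors in two quantities of different dimensions can be sketched with orthogonal distance regression. Here a single power law, log L = a + b·log M, stands in for the relation actually fitted, and the data and error bars are invented:

      import numpy as np
      from scipy import odr

      rng = np.random.default_rng(3)
      log_m = rng.uniform(-0.2, 0.6, 40)                  # log10(M / M_sun), fake
      log_l = 4.0 * log_m + 0.02 * rng.normal(size=40)    # log10(L / L_sun), fake

      # RealData carries uncertainties in both coordinates; ODR minimizes
      # distances orthogonal to the curve rather than vertical residuals.
      data = odr.RealData(log_m, log_l, sx=0.02, sy=0.05)
      model = odr.Model(lambda b, x: b[0] + b[1] * x)
      out = odr.ODR(data, model, beta0=[0.0, 4.0]).run()
      print("intercept, slope:", np.round(out.beta, 3))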

  18. The V Band Empirical Mass-Luminosity Relation for Main Sequence Stars

    NASA Astrophysics Data System (ADS)

    Xia, F.; Fu, Y. N.

    2010-01-01

    Stellar mass is an indispensable parameter in studies of stellar physics and stellar dynamics. On the one hand, the most reliable way to determine a stellar dynamical mass is via orbital determination of binaries. On the other hand, most stellar masses have to be estimated using the mass-luminosity relation (MLR). It is therefore important to obtain the empirical MLR by fitting data on stellar dynamical masses and luminosities. The effect of metallicity can make this relation disperse in the V band, but studies show that this is mainly limited to stellar masses below 0.6 M⊙. Recently, many relevant data have been accumulated for main sequence stars of larger mass, making it possible to significantly improve the corresponding MLR. Using a fitting method that can reasonably assign weights to observational data comprising two quantities with different dimensions, we obtain a V-band MLR based on the dynamical masses and luminosities of 203 main sequence stars. Compared with previous work, the improvement in the MLR is statistically significant, and the relative error of mass estimation reaches about 5%. Our MLR is therefore useful not only in studies of a statistical nature, but also in studies of concrete stellar systems, such as the long-term dynamical study and the short-term positioning study of a specific multiple star system.

  19. Oscillation mechanics of the respiratory system.

    PubMed

    Bates, Jason H T; Irvin, Charles G; Farré, Ramon; Hantos, Zoltán

    2011-07-01

    The mechanical impedance of the respiratory system defines the pressure profile required to drive a unit of oscillatory flow into the lungs. Impedance is a function of oscillation frequency, and is measured using the forced oscillation technique. Digital signal processing methods, most notably the Fourier transform, are used to calculate impedance from measured oscillatory pressures and flows. Impedance is a complex function of frequency, having both real and imaginary parts that vary with frequency in ways that can be used empirically to distinguish normal lung function from a variety of different pathologies. The most useful diagnostic information is gained when anatomically based mathematical models are fit to measurements of impedance. The simplest such model consists of a single flow-resistive conduit connecting to a single elastic compartment. Models of greater complexity may have two or more compartments, and provide more accurate fits to impedance measurements over a variety of different frequency ranges. The model that currently enjoys the widest application in studies of animal models of lung disease consists of a single airway serving an alveolar compartment comprising tissue with a constant-phase impedance. This model has been shown to fit very accurately to a wide range of impedance data, yet contains only four free parameters, and as such is highly parsimonious. The measurement of impedance in human patients is also now rapidly gaining acceptance, and promises to provide a more comprehensible assessment of lung function than parameters derived from conventional spirometry. © 2011 American Physiological Society.
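
    The constant-phase model referred to above is commonly written Z(w) = Raw + jwIaw + (G − jH)/w^alpha with alpha = (2/pi)·arctan(H/G), where Raw is a Newtonian resistance, Iaw an inertance, and G and H tissue damping and elastance. A minimal least-squares fit on synthetic data (all parameter values invented):

      import numpy as np
      from scipy.optimize import least_squares

      def model(p, w):
          raw, iaw, g, h = p
          alpha = (2.0 / np.pi) * np.arctan2(h, g)
          return raw + 1j * w * iaw + (g - 1j * h) / w ** alpha

      w = 2 * np.pi * np.linspace(0.5, 20, 40)     # angular frequencies, rad/s
      z_meas = model([0.5, 0.01, 3.0, 15.0], w)    # hypothetical "measured" impedance

      def resid(p):                                # stack real and imaginary residuals
          d = model(p, w) - z_meas
          return np.concatenate([d.real, d.imag])

      fit = least_squares(resid, x0=[1.0, 0.005, 1.0, 5.0], bounds=(0.0, np.inf))
      print("Raw, Iaw, G, H =", np.round(fit.x, 3))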

  20. Sorption of organic gases in a furnished room

    NASA Astrophysics Data System (ADS)

    Singer, Brett C.; Revzan, Kenneth L.; Hotchi, Toshifumi; Hodgson, Alfred T.; Brown, Nancy J.

    We present experimental data and semi-empirical models describing the sorption of organic gases in a simulated indoor residential environment. Two replicate experiments were conducted with 20 volatile organic compounds (VOCs) in a 50-m3 room finished with painted wallboard, carpet and cushion, draperies, and furnishings. The VOCs span a wide volatility range and include ten hazardous air pollutants. VOCs were introduced to the static chamber as a pulse and their gas-phase concentrations were measured during a net adsorption period and a subsequent net desorption period. Three sorption models were fit to the measured concentrations for each compound to determine the simplest formulation needed to adequately describe the observed behavior. Sorption parameter values were determined by fitting the models to adsorption period data, then checked by comparing measured and predicted behavior during desorption. The adequacy of each model was evaluated using a goodness-of-fit parameter calculated for each period. Results indicate that sorption usually does not greatly affect indoor concentrations of methyl-tert-butyl ether, 2-butanone, isoprene, and benzene. In contrast, sorption appears to be a relevant indoor process for many of the VOCs studied, including C8-C10 aromatic hydrocarbons (HC), terpenes, and pyridine. These compounds sorbed at rates close to typical residential air change rates and exhibited substantial sorptive partitioning at equilibrium. Polycyclic aromatic HCs, aromatic alcohols, ethenylpyridine, and nicotine initially adsorbed to surfaces at rates of 1.5 to >6 h-1 and partitioned 95 to >99% into the sorbed phase at equilibrium.
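
    The simplest such formulation, a single reversible sink, treats the gas-phase concentration C and sorbed mass M as dC/dt = −ka·C + kd·M and dM/dt = ka·C − kd·M. A sketch of fitting ka and kd to adsorption-period data (all values fabricated):

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import curve_fit

      def c_model(t, ka, kd, c0=100.0):
          # One-sink sorption model in a static chamber (no air exchange).
          def rhs(_, y):
              c, m = y
              return [-ka * c + kd * m, ka * c - kd * m]
          sol = solve_ivp(rhs, (t[0], t[-1]), [c0, 0.0], t_eval=t, rtol=1e-8)
          return sol.y[0]

      t = np.linspace(0.0, 6.0, 25)                        # hours
      rng = np.random.default_rng(4)
      c_obs = c_model(t, 2.0, 0.8) * rng.lognormal(0.0, 0.01, t.size)  # fake data

      p, _ = curve_fit(c_model, t, c_obs, p0=[1.0, 0.5])
      print(f"ka = {p[0]:.2f} 1/h, kd = {p[1]:.2f} 1/h")

    Desorption-period data would then be compared against predictions from these fitted values, as the study does.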

  1. Investigation of empirical damping laws for the space shuttle

    NASA Technical Reports Server (NTRS)

    Bernstein, E. L.

    1973-01-01

    An analysis of dynamic test data from vibration testing of a number of aerospace vehicles was made to develop an empirical structural damping law. A systematic attempt was made to fit dissipated energy/cycle to combinations of all dynamic variables. The best-fit laws for bending, torsion, and longitudinal motion are given, with error bounds. A discussion and estimate are made of error sources. Programs are developed for predicting equivalent linear structural damping coefficients and finding the response of nonlinearly damped structures.

  2. Testing Vocational Interests and Personality as Predictors of Person-Vocation and Person-Job Fit

    ERIC Educational Resources Information Center

    Ehrhart, Karen Holcombe; Makransky, Guido

    2007-01-01

    The fit between individuals and their work environments has received decades of theoretical and empirical attention. This study investigated two antecedents to individuals' perceptions of fit: vocational interests and personality. More specifically, the authors hypothesized that vocational interests assessed in terms of the Career Occupational…

  3. A New Look at the Eclipse Timing Variation Diagram Analysis of Selected 3-body W UMa Systems

    NASA Astrophysics Data System (ADS)

    Christopoulou, P.-E.; Papageorgiou, A.

    2015-07-01

    The light-travel-time effect produced by the presence of tertiary components can reveal much about the origin and evolution of over-contact binaries. Monitoring of W UMa systems over the last decade and/or the use of publicly available photometric surveys (NSVS, ASAS, etc.) has uncovered or suggested the presence of many unseen companions, which calls for an in-depth investigation of the parameters derived from cyclic period variations in order to confirm or reject the assumption of hidden companion(s). Progress in the analysis of eclipse timing variations is summarized here from both the empirical and theoretical points of view, and a more extensive investigation of the proposed orbital parameters of third bodies is advocated. The code we have developed for this, implemented in Python, is set up to handle heuristic scanning with parameter perturbation in parameter space, and to establish realistic uncertainties from the least-squares fitting. A computational example is given for TZ Boo, a W UMa system with a spectroscopically detected third component. Future options to be implemented include MCMC and bootstrapping.
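
    For the circular-orbit special case, the light-travel-time signal is O−C(t) = A·sin(2π(t − T0)/P3), and a heuristic scan becomes straightforward: step the third-body period over a grid and solve the linear amplitude/phase part at each trial period. The sketch below uses fabricated timing residuals:

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.sort(rng.uniform(0, 8000, 120))     # observation epochs, days
      oc = (0.004 * np.sin(2 * np.pi * t / 2900.0 + 1.0)
            + 0.0005 * rng.normal(size=t.size))  # fake O-C residuals, days

      best = (np.inf, None)
      for p3 in np.linspace(500, 5000, 2000):    # heuristic period scan
          X = np.column_stack([np.sin(2 * np.pi * t / p3),
                               np.cos(2 * np.pi * t / p3),
                               np.ones_like(t)])
          beta, *_ = np.linalg.lstsq(X, oc, rcond=None)
          sse = np.sum((X @ beta - oc) ** 2)
          if sse < best[0]:
              best = (sse, p3)
      print(f"best-fit third-body period: {best[1]:.0f} d")

    An eccentric orbit adds further nonlinear parameters, which is where parameter perturbation and bootstrap uncertainty estimates become valuable.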

  4. Colloid Transport in Saturated Porous Media: Elimination of Attachment Efficiency in a New Colloid Transport Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.

    A new colloid transport model is introduced that is conceptually simple but captures the essential features of the complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport, such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.

  5. Degree of Ice Particle Surface Roughness Inferred from Polarimetric Observations

    NASA Technical Reports Server (NTRS)

    Hioki, Souichiro; Yang, Ping; Baum, Bryan A.; Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Riedi, Jerome

    2016-01-01

    The degree of surface roughness of ice particles within thick, cold ice clouds is inferred from multidirectional, multi-spectral satellite polarimetric observations over oceans, assuming a column-aggregate particle habit. An improved roughness inference scheme is employed that provides a more noise-resilient roughness estimate than the conventional best-fit approach. The improvements include the introduction of a quantitative roughness parameter based on empirical orthogonal function analysis and proper treatment of polarization due to atmospheric scattering above clouds. A global 1-month data sample supports the use of a severely roughened ice habit to simulate the polarized reflectivity associated with ice clouds over ocean. The density distribution of the roughness parameter inferred from the global 1-month data sample and further analyses of a few case studies demonstrate the significant variability of ice cloud single-scattering properties. However, the present theoretical results do not agree with observations in the tropics. In the extra-tropics, the roughness parameter is inferred, but 74% of the sample is out of the expected parameter range. Potential improvements are discussed to enhance the depiction of the natural variability on a global scale.

  6. Empirical study on a directed and weighted bus transport network in China

    NASA Astrophysics Data System (ADS)

    Feng, Shumin; Hu, Baoyu; Nie, Cen; Shen, Xianghao

    2016-01-01

    Bus transport networks are directed complex networks consisting of routes, stations, and passenger flow. In this study, the concept of a duplication factor is introduced to analyze the differences between uplinks and downlinks in the bus transport network of Harbin (BTN-H). Further, a new representation model for BTNs is proposed, named directed space P. Two empirical characteristics of BTN-H are reported in this paper. First, the cumulative distributions of weighted degree, degree, number of routes connecting to each station, and node weight (peak-hour trips at a station) uniformly follow an exponential law; the node weight also shows positive correlations with the corresponding weighted degree, degree, and number of routes connecting to a station. Second, a new richness parameter is defined for each node from its node weight, and the connectivity, weighted connectivity, average shortest path length, and efficiency between rich nodes can be fitted by composite exponential functions, demonstrating the rich-club phenomenon.

  7. Empirical Analysis of the Photoelectrochemical Impedance Response of Hematite Photoanodes for Water Photo-oxidation.

    PubMed

    Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner

    2018-03-15

    Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
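
    A DRT can be extracted from impedance data by discretizing Z(w) ≈ R0 + Σk gk/(1 + jw·tau_k) on a fixed grid of relaxation times and solving a regularized least-squares problem for the weights gk. The sketch below omits the nonnegativity constraint often imposed in practice, and the two-process spectrum is synthetic:

      import numpy as np

      w = 2 * np.pi * np.logspace(-1, 4, 60)                           # rad/s
      z = 0.1 + 1.0 / (1 + 1j * w * 1e-2) + 0.5 / (1 + 1j * w * 1e0)   # fake spectrum

      tau = np.logspace(-4, 2, 80)                  # relaxation-time grid
      K = 1.0 / (1 + 1j * np.outer(w, tau))         # kernel matrix
      A = np.vstack([np.column_stack([K.real, np.ones(w.size)]),
                     np.column_stack([K.imag, np.zeros(w.size)])])
      b = np.concatenate([z.real, z.imag])

      lam = 1e-3                                    # Tikhonov regularization strength
      g = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
      top = np.sort(tau[np.argsort(g[:-1])[-2:]])   # grid points with largest weights
      print("dominant relaxation times ~", top)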

  8. Development of an analytical solution for the Budyko watershed parameter in terms of catchment physical features

    NASA Astrophysics Data System (ADS)

    Reaver, N.; Kaplan, D. A.; Jawitz, J. W.

    2017-12-01

    The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy-to-measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g., soil type and depth, vegetation type, timing of rainfall, etc.). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts to catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e., predicted parameters) with those obtained through calibration of the Budyko equation to discharge data (i.e., empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.
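
    The empirical route mentioned above (calibrating the parameter in gauged catchments) amounts to inverting the Budyko curve for the watershed parameter. Using Fu's form, E/P = 1 + φ − (1 + φ^ω)^(1/ω) with φ = PET/P, this is a one-dimensional root-finding problem; the catchment numbers below are invented:

      import numpy as np
      from scipy.optimize import brentq

      def fu(phi, w):
          return 1 + phi - (1 + phi ** w) ** (1.0 / w)

      p, pet, q = 900.0, 1100.0, 260.0        # mm/yr: rainfall, PET, discharge (fake)
      e_over_p = (p - q) / p                  # actual ET fraction from water balance
      phi = pet / p                           # aridity index

      w = brentq(lambda w_: fu(phi, w_) - e_over_p, 1.01, 10.0)
      print(f"empirical Budyko parameter w = {w:.2f}")

    The study's analytical solutions aim to predict this parameter directly from rooting depth, porosity, and wilting point instead of calibrating it.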

  9. The ACTIVE conceptual framework as a structural equation model.

    PubMed

    Gross, Alden L; Payne, Brennan R; Casanova, Ramon; Davoudzadeh, Pega; Dzierzewski, Joseph M; Farias, Sarah; Giovannetti, Tania; Ip, Edward H; Marsiske, Michael; Rebok, George W; Schaie, K Warner; Thomas, Kelsey; Willis, Sherry; Jones, Richard N

    2018-01-01

    Background/Study Context: Conceptual frameworks are analytic models at a high level of abstraction. Their operationalization can inform randomized trial design and sample size considerations. The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) conceptual framework was empirically tested using structural equation modeling (N=2,802). ACTIVE was guided by a conceptual framework for cognitive training in which proximal cognitive abilities (memory, inductive reasoning, speed of processing) mediate treatment-related improvement in primary outcomes (everyday problem-solving, difficulty with activities of daily living, everyday speed, driving difficulty), which in turn lead to improved secondary outcomes (health-related quality of life, health service utilization, mobility). Measurement models for each proximal, primary, and secondary outcome were developed and tested using baseline data. Each construct was then combined in one model to evaluate fit (RMSEA, CFI, normalized residuals of each indicator). To expand the conceptual model and potentially inform future trials, evidence of modification of structural model parameters was evaluated by age, years of education, sex, race, and self-rated health status. Preconceived measurement models for memory, reasoning, speed of processing, everyday problem-solving, instrumental activities of daily living (IADL) difficulty, everyday speed, driving difficulty, and health-related quality of life each fit the data well (all RMSEA < .05; all CFI > .95). Fit of the full model was excellent (RMSEA = .038; CFI = .924). In contrast with previous findings from ACTIVE regarding who benefits from training, interaction testing revealed that associations between proximal abilities and primary outcomes were stronger, on average, for participants who were nonwhite, in worse health, older, and less educated (p < .005). Empirical data confirm the hypothesized ACTIVE conceptual model. Findings suggest that the types of people who show intervention effects on cognitive performance may differ from those with the greatest chance of transfer to real-world activities.

  10. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data.

    PubMed

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu

    2017-03-27

    A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open-source program available for free on GitHub (https://github.com/fzahari/ParFit).
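
    The essential idea of a stochastic genetic-algorithm fit can be shown in a few lines: keep a population of candidate parameter values, score each against the reference energies, and propagate the fittest with mutation. This toy example fits a single torsion amplitude V in E(φ) = (V/2)(1 + cos 3φ) and is not ParFit's actual implementation:

      import numpy as np

      rng = np.random.default_rng(6)
      phi = np.linspace(0, 2 * np.pi, 36)
      e_ref = 0.5 * 1.4 * (1 + np.cos(3 * phi))        # fake ab initio target, V = 1.4

      def fitness(v):                                  # negative squared error
          return -np.sum((0.5 * v * (1 + np.cos(3 * phi)) - e_ref) ** 2)

      pop = rng.uniform(0.1, 5.0, 40)                  # initial population
      for _ in range(60):
          scores = np.array([fitness(v) for v in pop])
          parents = pop[np.argsort(scores)[-20:]]      # truncation selection
          children = rng.choice(parents, 20) + rng.normal(0, 0.05, 20)  # mutation
          pop = np.concatenate([parents, children])
      best = pop[np.argmax([fitness(v) for v in pop])]
      print(f"fitted V ~ {best:.3f}")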

  11. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data

    DOE PAGES

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...

    2017-02-23

    Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.

  12. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.

    Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.

  13. Assessment of spatial distribution of porosity and aquifer geohydraulic parameters in parts of the Tertiary-Quaternary hydrogeoresource of south-eastern Nigeria

    NASA Astrophysics Data System (ADS)

    George, N. J.; Akpan, A. E.; Akpan, F. S.

    2017-12-01

    Information deduced from an extensive surface resistivity study in three Local Government Areas of Akwa Ibom State, Nigeria, was integrated with hydrogeological data obtained from water boreholes to economically estimate porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. These parameters are predominantly estimated from laboratory analysis of core samples and from pumping test data generated from boreholes. However, such analysis is not only costly and time consuming, but also limited in areal coverage. The chosen technique employs surface resistivity data, core samples, and pumping test data to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity, and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be elevated near the riverbanks. Empirical models utilising the Archie, Waxman-Smits, and Kozeny-Carman (Bear) relations were employed to characterise the formation parameters, with good fits. The effect of surface conduction occasioned by clay, usually disregarded or ignored in Archie's model, was estimated to be 2.58 × 10-5 S; this conductance can be used as a corrective factor for the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models, and maps, geared towards realistic conclusions about the interrelationships between porosity and other aquifer parameters, were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10-5 m/s everywhere, indicating that there is no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities vary. The deduced parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
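
    For reference, Archie's relation in its simplest form (tortuosity factor a = 1) links bulk conductivity, pore-fluid conductivity, and porosity as σ = σw·φ^m, so porosity follows directly from the two conductivities. The numbers below are illustrative, not values from the study:

      # Invert Archie's law for porosity; m is the cementation exponent.
      sigma_bulk, sigma_w, m = 0.02, 0.5, 2.0      # S/m, S/m, dimensionless (fake)
      phi = (sigma_bulk / sigma_w) ** (1.0 / m)
      print(f"Archie porosity estimate: {phi:.2f}")

    A clay-related surface conductance such as the one reported above would be subtracted from the bulk conductivity before this inversion.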

  14. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings the path effect is relatively small, so the trade-off problem simplifies to one between source and site effects (the latter commonly referred to as the "kappa" value). This problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee, and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.
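
    The source-site trade-off can be made concrete with a Brune omega-square source spectrum attenuated by a site kappa term, A(f) = Ω0·(1 + (f/fc)^2)^(−1)·exp(−π·κ·f), fitted in log space. With fc and κ both free the two parameters trade off against each other; constraining the source spectrum (e.g., via stacking) stabilizes κ. Synthetic data only:

      import numpy as np
      from scipy.optimize import curve_fit

      def log_spec(f, log_o0, fc, kappa):
          return log_o0 - np.log(1 + (f / fc) ** 2) - np.pi * kappa * f

      f = np.logspace(0, 1.6, 50)                       # 1 to 40 Hz
      rng = np.random.default_rng(7)
      a_obs = log_spec(f, 0.0, 6.0, 0.04) + 0.05 * rng.normal(size=f.size)  # fake

      p, _ = curve_fit(log_spec, f, a_obs, p0=[0.0, 3.0, 0.02])
      print(f"fc = {p[1]:.1f} Hz, kappa = {p[2]:.3f} s")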

  15. A charge optimized many-body potential for titanium nitride (TiN).

    PubMed

    Cheng, Y-T; Liang, T; Martinez, J A; Phillpot, S R; Sinnott, S B

    2014-07-02

    This work presents a new empirical, variable charge potential for TiN systems in the charge-optimized many-body potential framework. The potential parameters were determined by fitting them to experimental data for the enthalpy of formation, lattice parameters, and elastic constants of rocksalt structured TiN. The potential does a good job of describing the fundamental physical properties (defect formation and surface energies) of TiN relative to the predictions of first-principles calculations. This potential is used in classical molecular dynamics simulations to examine the interface of fcc-Ti(0 0 1)/TiN(0 0 1) and to characterize the adsorption of oxygen atoms and molecules on the TiN(0 0 1) surface. The results indicate that the potential is well suited to model TiN thin films and to explore the chemistry associated with their oxidation.

  16. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

    Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high-risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. With recent work illustrating that the dispersion parameter can depend on the covariates of NB models, especially for traffic flow-only models, and can vary across time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms, as well as fixed and time-varying dispersion parameters, affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter, and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates, in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
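
    A worked sketch makes the dispersion parameter's role in the EB estimate concrete. For an NB model with predicted mean mu and inverse dispersion parameter phi (variance mu + mu^2/phi), the weight on the model prediction is w = phi/(phi + mu); the crash counts below are illustrative only:

      def eb_estimate(mu, observed, phi):
          w = phi / (phi + mu)                 # weight on the model prediction
          return w * mu + (1 - w) * observed, w

      mu, observed = 2.0, 7                    # predicted vs. observed crashes/yr (fake)
      for phi in (1.0, 3.0, 10.0):             # small phi = strong over-dispersion
          est, w = eb_estimate(mu, observed, phi)
          print(f"phi={phi:>4}: weight on model = {w:.2f}, EB estimate = {est:.2f}")

    If phi itself varies with traffic flow or time period, so do the weights, which is exactly why the choice of functional form shifts both the EB estimates and the resulting hotspot rankings.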

  17. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…

  18. Solar-wind predictions for the Parker Solar Probe orbit. Near-Sun extrapolations derived from an empirical solar-wind model based on Helios and OMNI observations

    NASA Astrophysics Data System (ADS)

    Venzmer, M. S.; Bothmer, V.

    2018-03-01

    Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona, with closest perihelia at 9.86 solar radii (R⊙) from the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment at PSP's unprecedented distances during its prime mission phase in the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS), which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aims. We present an empirical solar-wind model for the inner heliosphere derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic at heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions of the solar-wind key parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the velocity distribution's bi-component shape, consisting of a slower and a faster part. Functional relations to solar activity are compiled using the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power-law dependencies. Thus an empirical solar-wind model for the inner heliosphere, confined to the ecliptic region, is derived, accounting for solar activity and solar distance through adequate shifts of the lognormal distributions. Finally, the inclusion of SSN predictions and the extrapolation down to PSP's perihelion region enable us to estimate the solar-wind environment along PSP's planned trajectory during its mission. Results: The CGAUSS empirical solar-wind model for PSP yields dependencies on solar activity and solar distance for the solar-wind parameters' frequency distributions. The estimated solar-wind median values for PSP's first perihelion in 2018, at a solar distance of 0.16 au, are 87 nT, 340 km s-1, 214 cm-3, and 503 000 K. The estimates for PSP's closest perihelion, occurring in 2024 at 0.046 au (9.86 R⊙), are 943 nT, 290 km s-1, 2951 cm-3, and 1 930 000 K. Since the modeled velocity and temperature values below approximately 20 R⊙ appear overestimated in comparison with existing observations, this suggests that PSP will directly measure solar-wind acceleration and heating processes below 20 R⊙ as planned.
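
    The model's two ingredients, a lognormal frequency distribution near 1 au and a power-law scaling of its location with solar distance, can be sketched as follows. The exponent and values are illustrative placeholders, not the paper's fitted coefficients:

      import numpy as np

      rng = np.random.default_rng(8)
      b_1au = rng.lognormal(mean=np.log(6.0), sigma=0.5, size=100_000)  # |B| at 1 au, nT

      def median_scale(r_au, exponent=-1.66):   # assumed power-law dependence on r
          return r_au ** exponent

      for r in (1.0, 0.16, 0.046):
          shifted = np.median(b_1au) * median_scale(r)
          print(f"r = {r:5.3f} au: median |B| ~ {shifted:7.1f} nT")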

  19. Competition between global and local online social networks

    NASA Astrophysics Data System (ADS)

    Kleineberg, Kaj-Kolja; Boguñá, Marián

    2016-04-01

    The overwhelming success of online social networks, the key actors in the Web 2.0 cosmos, has reshaped human interactions globally. To help understand the fundamental mechanisms which determine the fate of online social networks at the system level, we describe the digital world as a complex ecosystem of interacting networks. In this paper, we study the impact of heterogeneity in network fitnesses on the competition between an international network, such as Facebook, and local services. The higher fitness of international networks is induced by their ability to attract users from all over the world, who can then establish social interactions without the limitations of local networks. In other words, inter-country social ties lead to increased fitness of the international network. To study the competition between an international network and local ones, we construct a 1:1000 scale model of the digital world, consisting of the 80 countries with the most Internet users. Under certain conditions, this competition leads to the extinction of local networks; under different conditions, local networks can persist and even dominate completely. In particular, our model suggests that, with the parameters that best reproduce the empirical overtake of Facebook, there is a significant probability that this overtake would not have taken place.

  20. Competition between global and local online social networks.

    PubMed

    Kleineberg, Kaj-Kolja; Boguñá, Marián

    2016-04-27

    The overwhelming success of online social networks, the key actors in the Web 2.0 cosmos, has reshaped human interactions globally. To help understand the fundamental mechanisms which determine the fate of online social networks at the system level, we describe the digital world as a complex ecosystem of interacting networks. In this paper, we study the impact of heterogeneity in network fitnesses on the competition between an international network, such as Facebook, and local services. The higher fitness of international networks is induced by their ability to attract users from all over the world, who can then establish social interactions without the limitations of local networks. In other words, inter-country social ties lead to increased fitness of the international network. To study the competition between an international network and local ones, we construct a 1:1000 scale model of the digital world, consisting of the 80 countries with the most Internet users. Under certain conditions, this competition leads to the extinction of local networks; under different conditions, local networks can persist and even dominate completely. In particular, our model suggests that, with the parameters that best reproduce the empirical overtake of Facebook, there is a significant probability that this overtake would not have taken place.

  1. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble over the most sensitive parameters, determined by the Sobol method to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that it is possible to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
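
    The regionalization step (relating per-site calibrated parameters to local environmental characteristics, then predicting them elsewhere) can be sketched with scikit-learn's Extra-Trees regressor. Everything below is synthetic; the real inputs would be the 85 FLUXNET calibrations and gridded covariates:

      import numpy as np
      from sklearn.ensemble import ExtraTreesRegressor

      rng = np.random.default_rng(9)
      env = rng.uniform(size=(85, 5))              # fake climate/soil/vegetation covariates
      params = np.column_stack([                   # fake calibrated (rs_min, Czil, fxexp)
          40 + 200 * env[:, 0],
          0.05 + 0.5 * env[:, 1],
          1 + 3 * env[:, 2],
      ]) + 0.05 * rng.normal(size=(85, 3))

      model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(env, params)
      new_sites = rng.uniform(size=(3, 5))         # covariates at unmonitored locations
      print(np.round(model.predict(new_sites), 3))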

  2. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    PubMed Central

    Proynov, Emil; Liu, Fenglai; Gan, Zhengting; Wang, Matthew; Kong, Jing

    2015-01-01

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of the Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values for a series of noble-gas atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find good agreement with high-level ab initio values reported recently in the literature. Regarding the damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of the He3 and Ar3 trimers fitted to results of high-level wavefunction theories available in the literature. Both damping schemes respond well to the optimization of two parameters. PMID:26328836

  3. Emergent neutrality drives phytoplankton species coexistence

    PubMed Central

    Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo

    2011-01-01

    The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680

  4. Analysis of urinary excretion data from three plutonium-contaminated wounds at Los Alamos National Laboratory

    DOE PAGES

    Poudel, Deepesh; Klumpp, John A.; Waters, Tom L.; ...

    2017-07-14

    The NCRP-156 Report proposes seven different biokinetic models for wound cases, depending on the physicochemistry of the contaminant. Because the models were heavily based on experimental animal data, the authors of the report encouraged application and validation of the models using bioassay data from actual human exposures. Each of the wound models was applied to three plutonium-contaminated wounds, and the models yielded good agreement with only one of the cases. We then applied a simpler biokinetic model structure to the bioassay data and showed that fitting the transfer rates of this model structure yielded better agreement with the data than does the best-fitting NCRP-156 model. Because the biokinetics of radioactive material in each wound is different, it is impractical to propose a discrete set of model parameters to describe the biokinetics of radionuclides in all wounds, and thus each wound should be treated empirically.

  5. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proynov, Emil; Wang, Matthew; Kong, Jing, E-mail: jing.kong@mtsu.edu

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of the Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values for a series of noble-gas atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find good agreement with high-level ab initio values reported recently in the literature. Regarding the damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of the He3 and Ar3 trimers fitted to results of high-level wavefunction theories available from the literature. Both damping schemes respond well to the optimization of two parameters.

  6. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, under the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate changes in these quantities during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and to characterize their properties in large statistical samples, both those already gathered and those expected as new facilities become available.
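
    A generic smoothly broken power law captures the flavor of the paper's variant function: a rising index a1 near first light, a post-break index a2, a break time tb, and a smoothness s. The light curve below is synthetic, and the functional form is a stand-in rather than the paper's exact expression:

      import numpy as np
      from scipy.optimize import curve_fit

      def broken_pl(t, amp, tb, a1, a2, s):
          x = t / tb
          return amp * x ** a1 / (1 + x ** (s * (a1 - a2))) ** (1.0 / s)

      t = np.linspace(1, 40, 60)                          # days since first light
      rng = np.random.default_rng(10)
      flux = broken_pl(t, 1.0, 17.0, 2.2, -1.3, 2.0) * rng.lognormal(0, 0.02, t.size)

      p, _ = curve_fit(broken_pl, t, flux, p0=[1.0, 15.0, 2.0, -1.0, 2.0], maxfev=20000)
      print(f"break time tb = {p[1]:.1f} d, rise index a1 = {p[2]:.2f}")

    For t much less than tb the function rises as t^a1 (an index near 2 matches the expanding-fireball picture); beyond the break it rolls over to t^a2.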

  7. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
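
    The regression approach described above reduces to: encode genotypes as binary vectors, build a design matrix with main-effect and pairwise columns, and fit a regularized linear model. The toy landscape below is random, standing in for the quasi-empirical RNA landscape:

      import numpy as np
      from itertools import combinations
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(11)
      L, n = 10, 300                                   # sites, sampled genotypes
      G = rng.integers(0, 2, size=(n, L)).astype(float)

      h = rng.normal(0, 1, L)                          # toy main effects
      J = np.triu(rng.normal(0, 0.3, (L, L)), 1)       # toy pairwise (epistatic) effects
      y = G @ h + np.einsum("ni,ij,nj->n", G, J, G) + 0.1 * rng.normal(size=n)

      pairs = list(combinations(range(L), 2))
      X = np.column_stack([G] + [G[:, i] * G[:, j] for i, j in pairs])
      model = Ridge(alpha=1.0).fit(X, y)
      print("R^2 on the training sample:", round(model.score(X, y), 3))

    How well such a fit generalizes beyond the sampled genotypes depends strongly on the sampling regime, which is the paper's central point.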

  8. An easy-to-parameterise physics-informed battery model and its application towards lithium-ion battery cell design, diagnosis, and degradation

    NASA Astrophysics Data System (ADS)

    Merla, Yu; Wu, Billy; Yufit, Vladimir; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2018-04-01

    Accurate diagnosis of lithium-ion battery state-of-health (SOH) is of significant value for many applications, to improve performance, extend life, and increase safety. However, in-situ or in-operando diagnosis of SOH often requires robust models. Many models are available; however, these often require expensive-to-measure ex-situ parameters and/or contain unmeasurable parameters that were fitted or assumed. In this work, we have developed a new empirically parameterised, physics-informed equivalent circuit model. Its modular construction and low-cost parametrisation requirements allow end users to parameterise cells quickly and easily. The model is accurate to 19.6 mV for dynamic loads without any global fitting/optimisation, only that of the individual elements. The consequences of various degradation mechanisms are simulated, and the impact of a degraded cell on pack performance is explored, validated by comparison with experiment. Results show that an aged cell in a parallel pack does not have a noticeable effect on the available capacity of the other cells in the pack. The model shows that cells perform better when electrodes are more porous towards the separator and have a uniform particle size distribution, validated by comparison with published data. The model is provided with this publication for readers to use.

  9. A comment on priors for Bayesian occupancy models.

    PubMed

    Northrup, Joseph M; Gerber, Brian D

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
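
    The phenomenon is easy to reproduce: a wide normal prior on a logit-scale parameter implies a prior on the probability scale whose mass piles up near 0 and 1. The standard deviation of 10 below is an illustrative choice, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)
# "vague" Normal(0, 10^2) prior on an occupancy intercept, logit scale
beta0 = rng.normal(0.0, 10.0, size=100_000)
psi = 1.0 / (1.0 + np.exp(-beta0))      # implied prior on occupancy probability

# far from flat on (0, 1): the first and last bins dominate
hist, _ = np.histogram(psi, bins=10, range=(0.0, 1.0))
print(np.round(hist / hist.sum(), 3))
```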

  10. Shape selection in Landsat time series: a tool for monitoring forest dynamics.

    PubMed

    Moisen, Gretchen G; Meyer, Mary C; Schroeder, Todd A; Liao, Xiyue; Schleeweis, Karen G; Freeman, Elizabeth A; Toney, Chris

    2016-10-01

    We present a new methodology for fitting nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral band or index of choice in temporal Landsat data, our method delivers a smoothed rendition of the trajectory constrained to behave in an ecologically sensible manner, reflecting one of seven possible 'shapes'. It also provides parameters summarizing the patterns of each change including year of onset, duration, magnitude, and pre- and postchange rates of growth or recovery. Through a case study featuring fire, harvest, and bark beetle outbreak, we illustrate how resultant fitted values and parameters can be fed into empirical models to map disturbance causal agent and tree canopy cover changes coincident with disturbance events through time. We provide our code in the r package ShapeSelectForest on the Comprehensive R Archival Network and describe our computational approaches for running the method over large geographic areas. We also discuss how this methodology is currently being used for forest disturbance and attribute mapping across the conterminous United States. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  11. Error catastrophe and phase transition in the empirical fitness landscape of HIV

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-03-01

    We have translated clinical sequence databases of the p6 HIV protein into an empirical fitness landscape quantifying viral replicative capacity as a function of the amino acid sequence. We show that the viral population resides close to a phase transition in sequence space corresponding to an "error catastrophe", beyond which there is lethal accumulation of mutations. Our model predicts that the phase transition may be induced by drug therapies that elevate the mutation rate, or by forcing mutations at particular amino acids. Immune pressure applied to any combination of killer T-cell targets cannot induce the transition, providing a rationale for why the viral protein can exist close to the error catastrophe without sustaining fatal fitness penalties due to adaptive immunity.

  12. Efficiencies for production of atomic nitrogen and oxygen by relativistic proton impact in air

    NASA Technical Reports Server (NTRS)

    Porter, H. S.; Jackman, C. H.; Green, A. E. S.

    1976-01-01

    Relativistic electron and proton impact cross sections are obtained and represented by analytic forms which span the energy range from threshold to 1 GeV. For ionization processes, the Massey-Mohr continuum generalized oscillator strength surface is parameterized. Parameters are determined by simultaneous fitting to (1) empirical data, (2) the Bethe sum rule, and (3) doubly differential cross sections for ionization. Branching ratios for dissociation and predissociation from important states of N2 and O2 are determined. The efficiency for the production of atomic nitrogen and oxygen by protons with kinetic energy less than 1 GeV is determined using these branching ratio and cross section assignments.

  13. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.

  14. Effect of Diffuse Backscatter in Cassini Datasets on the Inferred Properties of Titan's surface

    NASA Astrophysics Data System (ADS)

    Sultan-Salem, A. K.; Tyler, G. L.

    2006-12-01

    Microwave (2.18 cm-λ) backscatter data for the surface of Titan obtained with the Cassini Radar instrument exhibit a significant diffuse scattering component. An empirical scattering law of the form Acos^{n}θ, with free parameters A and n, is often employed to model diffuse scattering, which may involve one or more unidentified mechanisms and processes, such as volume scattering and scattering from surface structure that is much smaller than the electromagnetic wavelength used to probe the surface. The cosine law in general is not explicit in its dependence on either the surface structure or the electromagnetic parameters. Further, the cosine law is often only a poor representation of the observed diffuse scattering, as can be inferred from computation of standard goodness-of-fit measures such as statistical significance. We fit four Cassini datasets (TA Inbound and Outbound, T3 Outbound, and T8 Inbound) with a linear combination of a cosine law and a generalized fractal-based quasi-specular scattering law (A. K. Sultan-Salem and G. L. Tyler, J. Geophys. Res., 111, E06S08, doi:10.1029/2005JE002540, 2006), in order to demonstrate how the presence of diffuse scattering considerably increases the uncertainty in surface parameters inferred from the quasi-specular component, typically the dielectric constant of the surface material and the surface root-mean-square slope. This uncertainty impacts inferences concerning the physical properties of surfaces that display mixed scattering properties.
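
    A hedged sketch of fitting the empirical cosine law σ0(θ) = A cosⁿθ by log-linear least squares; the angles, noise level, and true parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.deg2rad(np.linspace(5.0, 70.0, 30))     # incidence angles
A_true, n_true = 0.4, 1.8
sigma0 = A_true * np.cos(theta)**n_true
sigma0 *= 1.0 + 0.05 * rng.standard_normal(theta.size)

# linearize: log sigma0 = log A + n * log cos(theta)
G = np.column_stack([np.ones(theta.size), np.log(np.cos(theta))])
coef, *_ = np.linalg.lstsq(G, np.log(sigma0), rcond=None)
print(f"A = {np.exp(coef[0]):.3f}, n = {coef[1]:.3f}")
```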

  15. The dynamics of adapting, unregulated populations and a modified fundamental theorem.

    PubMed

    O'Dwyer, James P

    2013-01-06

    A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.

  16. Latent Trait Theory Approach to Measuring Person-Organization Fit: Conceptual Rationale and Empirical Evaluation

    ERIC Educational Resources Information Center

    Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex

    2009-01-01

    The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the…

  17. Building Spiritual Fitness in the Army: An Innovative Approach to a Vital Aspect of Human Development

    ERIC Educational Resources Information Center

    Pargament, Kenneth I.; Sweeney, Patrick J.

    2011-01-01

    This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated…

  18. Temporal variation and scaling of parameters for a monthly hydrologic model

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual parameter ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that the downscaled model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, but for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.

  19. Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.

    2014-06-01

    We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible to the science community at the NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
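
    The trapezoidal stage can be illustrated with a bare-bones four-parameter model (epoch, depth, duration, ingress time) fitted by Levenberg-Marquardt, SciPy's default for unbounded curve_fit. This is a schematic stand-in for the pipeline's implementation, with toy data.

```python
import numpy as np
from scipy.optimize import curve_fit

def trapezoid(t, t0, depth, duration, t_ing):
    """Trapezoidal transit: flat bottom with linear ingress/egress ramps."""
    dt = np.abs(t - t0)
    half = duration / 2.0
    flux = np.ones_like(t)
    inside = dt <= half - t_ing                 # full transit
    edge = (dt > half - t_ing) & (dt <= half)   # ingress/egress
    flux[inside] -= depth
    flux[edge] -= depth * (half - dt[edge]) / t_ing
    return flux

rng = np.random.default_rng(4)
t = np.linspace(-0.3, 0.3, 400)                 # days from rough mid-transit
obs = trapezoid(t, 0.01, 0.008, 0.25, 0.04)
obs += 2e-4 * rng.standard_normal(t.size)

popt, _ = curve_fit(trapezoid, t, obs, p0=(0.0, 0.005, 0.2, 0.03))
print("epoch, depth, duration, ingress =", np.round(popt, 4))
```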

  20. Overcoming equifinality: Leveraging long time series for stream metabolism estimation

    USGS Publications Warehouse

    Appling, Alison; Hall, Robert O.; Yackulic, Charles B.; Arroita, Maite

    2018-01-01

    The foundational ecosystem processes of gross primary production (GPP) and ecosystem respiration (ER) cannot be measured directly but can be modeled in aquatic ecosystems from subdaily patterns of oxygen (O2) concentrations. Because rivers and streams constantly exchange O2 with the atmosphere, models must either use empirical estimates of the gas exchange rate coefficient (K600) or solve for all three parameters (GPP, ER, and K600) simultaneously. Empirical measurements of K600 require substantial field work and can still be inaccurate. Three-parameter models have suffered from equifinality, where good fits to O2 data are achieved by many different parameter values, some unrealistic. We developed a new three-parameter, multiday model that ensures similar values for K600 among days with similar physical conditions (e.g., discharge). Our new model overcomes the equifinality problem by (1) flexibly relating K600 to discharge while permitting moderate daily deviations and (2) avoiding the oft-violated assumption that residuals in O2 predictions are uncorrelated. We implemented this hierarchical state-space model and several competitor models in an open-source R package, streamMetabolizer. We then tested the models against both simulated and field data. Our new model reduces error by as much as 70% in daily estimates of K600, GPP, and ER. Further, the accuracy benefits of multiday data sets are realized with as few as 3 days of data. This approach facilitates more accurate metabolism estimates for more streams and days, enabling researchers to better quantify carbon fluxes, compare streams by their metabolic regimes, and investigate controls on aquatic activity.

  1. Single photon counting linear mode avalanche photodiode technologies

    NASA Astrophysics Data System (ADS)

    Williams, George M.; Huntington, Andrew S.

    2011-10-01

    The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model is fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that the false count rate drops faster than the photon count rate as the detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement in the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
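
    The statistical construction, convolving a dark-count peak-height distribution with Gaussian amplifier noise and sweeping a detection threshold, can be sketched generically as below. An exponential-tail placeholder PMF stands in for the weighted sum of McIntyre distributions, and all numbers are illustrative.

```python
import numpy as np

v = np.linspace(0.0, 200.0, 2001)             # output amplitude grid (arb. units)
dv = v[1] - v[0]

# placeholder peak-height PMF of the multiplied dark current
pmf_dark = np.exp(-v / 25.0)
pmf_dark /= pmf_dark.sum()

# input-referred Gaussian TIA noise on the same grid, centered at zero
g = np.arange(-50.0, 50.0 + dv, dv)
noise = np.exp(-0.5 * (g / 6.0) ** 2)
noise /= noise.sum()

total = np.convolve(pmf_dark, noise)          # distribution of the uncorrelated sum
amp = v[0] + g[0] + dv * np.arange(total.size)

# false-count probability = P(amplitude > threshold)
exceed = 1.0 - np.cumsum(total)
for thr in (20.0, 50.0, 100.0):
    print(thr, exceed[np.searchsorted(amp, thr)])
```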

  2. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in the real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.

  3. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    NASA Astrophysics Data System (ADS)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value for the rock block volume (Vb) is of great interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not actually exist when more than three sets occur. In these cases, a consistent assessment of Vb can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. However, Palmström's empirical relationship based on the volumetric joint count Jv and on a block shape factor β is commonly used in practice, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. Jv and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (Vbm) of tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting the blocks, thus indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential to predict the mean block volume of multiple-set jointed rock masses with acceptable accuracy for common uses in most practical rock engineering applications.

  4. Design of a secondary ionization target for direct production of a C- beam from CO2 pulses for online AMS.

    PubMed

    Salazar, Gary; Ognibene, Ted

    2013-01-01

    We designed and optimized a novel device ("target") that directs a CO2 gas pulse onto a Ti surface where a Cs+ beam generates C- from the CO2. This secondary ionization target enables an accelerator mass spectrometer to ionize pulses of CO2 in the negative mode to measure 14C/12C isotopic ratios in real time. The design of the targets was based on computational flow dynamics, the ionization mechanism and empirical optimization. As part of the ionization mechanism, the adsorption of CO2 on the Ti surface was fitted with the Jovanovic-Freundlich isotherm model using empirical and simulation data. The inferred adsorption constants were in good agreement with other works. The empirical optimization showed that the amount of injected carbon and the flow speed of the helium carrier gas improve the ionization efficiency and the amount of 12C- produced until reaching a saturation point. A linear dynamic range between 150 and 1000 ng of C and an optimum carrier gas flow speed of around 0.1 mL/min were demonstrated. It was also shown that the ionization depends on the area of the Ti surface and the Cs+ beam cross-section. A range of ionization efficiencies of 1-2.5% was obtained by optimizing the described parameters.

  5. Enhancement of Oxygen Mass Transfer and Gas Holdup Using Palm Oil in Stirred Tank Bioreactors with Xanthan Solutions as Simulated Viscous Fermentation Broths

    PubMed Central

    Mohd Sauid, Suhaila; Huey Ling, Tan; Veluri, Murthy V. P. S.

    2013-01-01

    Volumetric mass transfer coefficient (kLa) is an important parameter in bioreactors handling viscous fermentations such as xanthan gum production, as it affects the reactor performance and productivity. Published literature showed that adding an organic phase such as hydrocarbons or vegetable oil could increase the kLa. The present study opted for palm oil as the organic phase as it is plentiful in Malaysia. Experiments were carried out to study the effect of viscosity, gas holdup, and kLa in xanthan solutions with different palm oil fractions by varying the agitation rate and aeration rate in a 5 L bench-top bioreactor fitted with twin Rushton turbines. Results showed that 10% (v/v) of palm oil raised the kLa of the xanthan solution by 1.5- to 3-fold, with the highest kLa value of 84.44 h−1. It was also found that palm oil increased the gas holdup and viscosity of the xanthan solution. The kLa values obtained as a function of power input, superficial gas velocity, and palm oil fraction were validated by two different empirical equations. Similarly, the gas holdup obtained as a function of power input and superficial gas velocity was validated by another empirical equation. All correlations were found to fit well, with high determination coefficients. PMID:24350269
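
    Empirical kLa correlations of the kind validated here commonly take a power-law form such as kLa = a (P/V)^b vg^c, fitted by log-log multiple linear regression. The sketch below performs that fit on toy data; the exponents and ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
PV = rng.uniform(100.0, 2000.0, n)     # specific power input P/V (W/m^3)
vg = rng.uniform(1e-3, 1e-2, n)        # superficial gas velocity (m/s)
kla = 0.02 * PV**0.5 * vg**0.4 * (1.0 + 0.05 * rng.standard_normal(n))

# ln kLa = ln a + b ln(P/V) + c ln vg
G = np.column_stack([np.ones(n), np.log(PV), np.log(vg)])
coef, *_ = np.linalg.lstsq(G, np.log(kla), rcond=None)
resid = np.log(kla) - G @ coef
r2 = 1.0 - resid @ resid / np.sum((np.log(kla) - np.log(kla).mean()) ** 2)
print(f"a={np.exp(coef[0]):.4f}, b={coef[1]:.3f}, c={coef[2]:.3f}, R^2={r2:.3f}")
```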

  6. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    PubMed Central

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and a varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to reach a decision. PMID:24635188

  7. Surface daytime net radiation estimation using artificial neural networks

    DOE PAGES

    Jiang, Bo; Zhang, Yi; Liang, Shunlin; ...

    2014-11-11

    Net all-wave surface radiation (Rn) is one of the most important fundamental parameters in various applications. However, conventional Rn measurements are difficult to collect because of the high cost and ongoing maintenance of recording instruments. Therefore, various empirical Rn estimation models have been developed. This study presents the results of two artificial neural network (ANN) models (general regression neural networks (GRNN) and Neuroet) used to estimate Rn globally from multi-source data, including remotely sensed products, surface measurements, and meteorological reanalysis products. Rn estimates provided by the two ANNs were tested against in-situ radiation measurements obtained from 251 global sites between 1991 and 2010, both in global mode (all data were used to fit the models) and in conditional mode (the data were divided into four subsets and the models were fitted separately). Based on the results obtained from extensive experiments, the two ANNs proved superior to linear-based empirical models in both global and conditional modes, and the GRNN performed better and was more stable than Neuroet. The GRNN estimates had a determination coefficient (R2) of 0.92, a root mean square error (RMSE) of 34.27 W·m−2, and a bias of −0.61 W·m−2 in global mode based on the validation dataset. In conclusion, ANN methods are a potentially powerful tool for global Rn estimation.
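
    A GRNN is, at its core, a Gaussian-kernel weighted average of the training targets (a Nadaraya-Watson estimator). The minimal sketch below shows the mechanics; the inputs and bandwidth are made up, standing in for the multi-source predictors used in the record.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(6)
X = rng.uniform(-1.0, 1.0, (200, 3))    # normalized toy predictors
y = 100.0 * X[:, 0] + 20.0 * X[:, 1] ** 2 + 5.0 * rng.standard_normal(200)

Xq = rng.uniform(-1.0, 1.0, (5, 3))
print(np.round(grnn_predict(X, y, Xq, sigma=0.3), 2))
```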

  8. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best-fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  9. EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice

    NASA Astrophysics Data System (ADS)

    Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.

    2016-12-01

    Very rough, ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total ice volume. Commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 underflight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but the roughness parameters derived independently from the LiDAR and radar data also agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in-situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.

  10. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  11. Breaking down "Healthism": Barriers to Health and Fitness as Identified by Immigrant Youth in St. John's, NL, Canada

    ERIC Educational Resources Information Center

    Shea, Jennifer M.; Beausoleil, Natalie

    2012-01-01

    In this article, we challenge dominant health and fitness discourses which stress individual responsibility in the attainment of these statuses. We examine the results of an empirical study exploring how a group of 15 Canadian immigrant youth, aged 12-17, discursively construct notions of health and fitness. Qualitative data were collected through…

  12. Executive Search Firms' Consideration of Person-Organization Fit in College and University Presidential Searches

    ERIC Educational Resources Information Center

    Turpin, James Christopher

    2013-01-01

    Largely what is known about P-O Fit stems from research conducted in business organizations. Surprisingly with such an important position as a college or university president, P-O Fit has not been empirically studied in the presidential selection process, much less from the perspective of the executive search firms that conduct these searches.…

  13. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    PubMed Central

    Christensen, Anders S.; Elstner, Marcus; Cui, Qiang

    2015-01-01

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The root mean square deviation (RMSD) of the interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap-type semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets. PMID:26328834

  14. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Anders S., E-mail: andersx@chem.wisc.edu, E-mail: cui@chem.wisc.edu; Cui, Qiang, E-mail: andersx@chem.wisc.edu, E-mail: cui@chem.wisc.edu; Elstner, Marcus

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The root mean square deviation (RMSD) of the interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap-type semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets.

  15. Fractal Theory for Permeability Prediction, Venezuelan and USA Wells

    NASA Astrophysics Data System (ADS)

    Aldana, Milagrosa; Altamiranda, Dignorah; Cabrera, Ana

    2014-05-01

    Inferring petrophysical parameters such as permeability, porosity, water saturation, capillary pressure, etc., from the analysis of well logs or other available core data has always been of critical importance in the oil industry. Permeability in particular, which is considered to be a complex parameter, has been inferred using both empirical and theoretical techniques. The main goal of this work is to predict permeability values in different wells using fractal theory, based on a method proposed by Pape et al. (1999). This approach uses the relationship between permeability and the geometric form of the pore space of the rock. The method is based on the modified Kozeny-Carman equation and a fractal pattern, which allows permeability to be determined as a function of the cementation exponent, porosity and the fractal dimension. Data from wells located in Venezuela and the United States of America are analyzed. Employing porosity and permeability data obtained from core samples, and applying the fractal theory method, we calculated the prediction equations for each well. Initially, this was achieved by training with 50% of the data available for each well. Afterwards, these equations were tested by inferring over 100% of the data to analyze possible trends in their distribution. This procedure gave excellent results in all the wells in spite of their geographic distance, generating permeability models with the potential to accurately predict permeability logs in the remaining parts of the well for which there are no core samples, even using porosity logs alone. Additionally, empirical models were used to determine permeability and the results were compared with those obtained by applying the fractal method. The results indicated that, although there are empirical equations that give a proper adjustment, the predictions obtained using fractal theory give a better fit to the core reference data.

  16. Cortical region-specific sleep homeostasis in mice: effects of time of day and waking experience.

    PubMed

    Guillaumin, Mathilde C C; McKillop, Laura E; Cui, Nanyi; Fisher, Simon P; Foster, Russell G; de Vos, Maarten; Peirson, Stuart N; Achermann, Peter; Vyazovskiy, Vladyslav V

    2018-04-25

    Sleep-wake history, wake behaviours, lighting conditions and circadian time influence sleep, but neither their relative contributions nor the underlying mechanisms are fully understood. The dynamics of EEG slow-wave activity (SWA) during sleep can be described using the two-process model, whereby the parameters of the homeostatic Process S are estimated using empirical EEG SWA (0.5-4 Hz) in non-rapid eye movement sleep (NREM), and the 24-h distribution of vigilance states. We hypothesised that the influence of extrinsic factors on sleep homeostasis, such as the time of day or wake behaviour, would manifest in systematic deviations between empirical SWA and model predictions. To test this hypothesis, we performed parameter estimation and tested model predictions using NREM SWA derived from continuous EEG recordings from the frontal and occipital cortex in mice. The animals showed prolonged wake periods, followed by consolidated sleep, during both the dark and light phases, and wakefulness primarily consisted of voluntary wheel running, learning a new motor skill or novel object exploration. Simulated SWA matched empirical levels well across conditions, and neither waking experience nor time of day had a significant influence on the fit between data and simulation. However, we consistently observed that Process S declined significantly faster during sleep in the frontal than in the occipital area of the neocortex. The striking resilience of the model to specific wake behaviours, lighting conditions and time of day suggests that the intrinsic factors underpinning the dynamics of Process S are robust to extrinsic influences, despite their major role in shaping the overall amount and distribution of vigilance states across 24 h.
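
    A minimal simulation of Process S under the two-process model: S rises as a saturating exponential toward an upper asymptote during wake and decays exponentially toward a lower asymptote during sleep. The time constants and asymptotes below are illustrative placeholders, not the values fitted to the mouse data.

```python
import numpy as np

def simulate_process_s(wake, dt=1.0 / 60.0, tau_i=18.0, tau_d=4.0,
                       ua=1.0, la=0.0, s0=0.5):
    """Process S: relax toward ua (wake) or la (sleep); taus in hours."""
    s, cur = np.empty(wake.size), s0
    for k, w in enumerate(wake):
        target, tau = (ua, tau_i) if w else (la, tau_d)
        cur = target + (cur - target) * np.exp(-dt / tau)
        s[k] = cur
    return s

# toy 48-h hypnogram at 1-min resolution: alternating 12 h wake / 12 h sleep
wake = np.tile(np.r_[np.ones(720), np.zeros(720)], 2).astype(bool)
S = simulate_process_s(wake)
print(round(S.min(), 3), round(S.max(), 3))
```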

  17. Inference of gene regulatory networks from genome-wide knockout fitness data

    PubMed Central

    Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.

    2013-01-01

    Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only the structural, but also the dynamical and non-linear nature of the biomolecular systems involved. A state-space model with a non-linear basis is used to dynamically describe gene regulatory networks. Network structure is then elucidated by estimating unknown model parameters. An unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data and empirical measurements of the GAL network in the yeast Saccharomyces cerevisiae and the TyrR-LiuR network in the bacterium Shewanella oneidensis. Availability: MATLAB code and datasets are available to download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/ Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23271269

  18. Analysis of PH3 spectra in the Octad range 2733-3660 cm-1

    NASA Astrophysics Data System (ADS)

    Nikitin, A. V.; Ivanova, Y. A.; Rey, M.; Tashkun, S. A.; Toon, G. C.; Sung, K.; Tyuterev, Vl. G.

    2017-12-01

    Improved analysis of positions and intensities of phosphine spectral lines in the Octad region 2733-3660 cm-1 is reported. Some 5768 positions and 1752 intensities were modelled with RMS deviations of 0.00185 cm-1 and 10.9%, respectively. Based on an ab initio potential energy surface, the full Hamiltonian of phosphine nuclear motion was reduced to an effective Hamiltonian using high-order Contact Transformations method adapted to polyads of symmetric top AB3-type molecules with a subsequent empirical optimization of parameters. More than 2000 new ro-vibrational lines were assigned that include transitions for all 13 vibrational Octad sublevels. This new fitting of measured positions and intensities considerably improved the accuracy of line parameters in the calculated database. A comparison of our results with experimental spectra of PNNL showed that the new set of line parameters from this work permits better simulation of observed cross-sections than the HITRAN2012 linelist. In the 2733-3660 cm-1 range, our integrated intensities show a good consistency with recent ab initio variational calculations.

  19. Tight-binding analysis of Si and GaAs ultrathin bodies with subatomic wave-function resolution

    NASA Astrophysics Data System (ADS)

    Tan, Yaohua P.; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy B.; Klimeck, Gerhard

    2015-08-01

    Empirical tight-binding (ETB) methods are widely used in atomistic device simulations. Traditional ways of generating the ETB parameters rely on direct fitting to bulk experiments or theoretical electronic bands. However, ETB calculations based on existing parameters lead to unphysical results in ultrasmall structures like As-terminated GaAs ultrathin bodies (UTBs). In this work, it is shown that more transferable ETB parameters with a short interaction range can be obtained by a process of mapping ab initio bands and wave functions to ETB models. This process enables the calibration of not only the ETB energy bands but also the ETB wave functions against corresponding ab initio calculations. Based on the mapping process, ETB models of Si and GaAs are parameterized with respect to hybrid functional calculations. Highly localized ETB basis functions are obtained. Both the ETB energy bands and wave functions with subatomic resolution of UTBs show good agreement with the corresponding hybrid functional calculations. The ETB methods can then be used to realistically describe extended devices under nonequilibrium conditions that cannot be tackled with ab initio methods.

  20. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: an autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, for both a Bayesian and a frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach to fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application to mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
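
    The bias is simple to demonstrate: adding white measurement noise to an AR(1) series attenuates the lag-1 autocorrelation estimate toward zero. The parameter values in this sketch are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):                    # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.standard_normal()
y = x + rng.standard_normal(n)           # observed series with measurement error

def lag1_autocorr(z):
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)    # crude AR(1) coefficient estimate

print("latent phi-hat:  ", round(lag1_autocorr(x), 3))   # near 0.6
print("observed phi-hat:", round(lag1_autocorr(y), 3))   # attenuated toward 0
```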

  1. Statistical Optimization of Reactive Plasma Cladding to Synthesize a WC-Reinforced Fe-Based Alloy Coating

    NASA Astrophysics Data System (ADS)

    Wang, Miqi; Zhou, Zehua; Wu, Lintao; Ding, Ying; Xu, Feilong; Wang, Zehua

    2018-04-01

    A new compound Fe-W-C powder for reactive plasma cladding was fabricated by a precursor carbonization process using sucrose as the precursor. Quadratic general rotary unitized design was applied to develop a mathematical model to predict and achieve the desired surface hardness of the plasma-cladded coating. The microstructure and microhardness of the coating produced with the optimal parameters were also investigated. According to the developed empirical model, the optimal process parameters were determined as follows: 1.4 for the C/W atomic ratio, 20 wt.% for the W content, 130 A for the scanning current and 100 mm/min (1.67 mm/s) for the scanning rate. The confidence level of the model was 99% according to the results of the F-test and the lack-of-fit test. Microstructural study showed that the dendritic structure comprised a mechanical mixture of α-Fe and carbides, while the interdendritic structure was a eutectic of α-Fe and carbides in the composite coating produced with the optimal parameters. WC phase generation was confirmed from the XRD pattern. Owing to the well-chosen preparation parameters, the average microhardness of the cladded coating reached 1120 HV0.1, four times the substrate microhardness.

  2. Decay analysis of compound nuclei formed in reactions with exotic neutron-rich 9Li projectile and the synthesis of 217At* within the dynamical cluster-decay model

    NASA Astrophysics Data System (ADS)

    Kaur, Arshdeep; Kaushal, Pooja; Hemdeep; Gupta, Raj K.

    2018-01-01

    The decay of various compound nuclei formed via the exotic neutron-rich 9Li projectile is studied within the dynamical cluster-decay model (DCM). Following the earlier work of one of us (RKG) and collaborators (M. Kaur et al. (2015) [1]), for an empirically fixed neck-length parameter ΔRemp, the only parameter in the DCM, at a given incident laboratory energy ELab, we are able to fit almost exactly the (total) fusion cross section σfus = ∑_{x=1}^{6} σxn for the 9Li projectile on 208Pb and other targets, with σfus depending strongly on the target mass of the most abundant isotope and its (magic) shell structure. This result shows the predictive power of the DCM. The neck-length parameter ΔRemp is fixed empirically for the decay of 217At* formed in the 9Li + 208Pb reaction at a fixed laboratory energy ELab, and the total fusion cross section σfus is then calculated for all other reactions using 9Li as a projectile on different targets. Apparently, this procedure could be used to predict σfus for 9Li-induced reactions where experimental data are not available. Furthermore, optimum choices of "cold" target-projectile combinations, forming "hot" compact configurations, are predicted for the synthesis of the compound nucleus 217At*, with 8Li + 209Pb as one of the target-projectile combinations, or another (t, p) combination, 48Ca + 169Tb, with doubly magic 48Ca, as the best possibility.

  3. On The Computation Of The Best-fit Okada-type Tsunami Source

    NASA Astrophysics Data System (ADS)

    Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.

    2017-12-01

    The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half space). This approach is highly effective, in particular in far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a "space of possible tsunami sources" (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles (strike, rake, and dip). To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling across the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of empirical Green functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
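
    Schematically, the Green-function search amounts to picking, from a precomputed waveform database, the candidate source that minimizes the misfit to the observed records. In the toy sketch below, random synthetic waveforms stand in for the real precomputed database.

```python
import numpy as np

rng = np.random.default_rng(8)
n_src, n_sta, n_t = 50, 6, 600

# placeholder database: waveform at each station for each candidate source
db = rng.standard_normal((n_src, n_sta, n_t)).cumsum(axis=-1)

truth = 17                                        # pretend source index
observed = db[truth] + 0.5 * rng.standard_normal((n_sta, n_t))

# grid search: summed squared misfit over all stations and samples
misfit = ((db - observed[None]) ** 2).sum(axis=(1, 2))
print("best-fit candidate:", int(np.argmin(misfit)), "(true:", truth, ")")
```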

  4. Star Count Density Profiles and Structural Parameters of 26 Galactic Globular Clusters

    NASA Astrophysics Data System (ADS)

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.; Dalessandro, E.; Vesperini, E.; Pasquato, M.; Beccari, G.; Pallanca, C.; Sanna, N.

    2013-09-01

    We used an appropriate combination of high-resolution Hubble Space Telescope observations and wide-field, ground-based data to derive the radial stellar density profiles of 26 Galactic globular clusters from resolved star counts (all of which can be freely downloaded online). With respect to surface brightness (SB) profiles (which can be biased by the presence of sparse, bright stars), star counts are considered the most robust and reliable tool for deriving cluster structural parameters. For each system, a detailed comparison with both King and Wilson models has been performed and the most relevant best-fit parameters have been obtained. This collection of data represents the largest homogeneous catalog of star count profiles and structural parameters derived therefrom collected so far. The analysis of the data in our catalog has shown that (1) the presence of the central cusps previously detected in the SB profiles of NGC 1851, M13, and M62 is not confirmed; (2) the majority of clusters in our sample are fit equally well by the King and the Wilson models; (3) we confirm the known relationship between cluster size (as measured by the effective radius) and galactocentric distance; (4) the ratio between the core and the effective radii shows a bimodal distribution, with a peak at ~0.3 for about 80% of the clusters and a secondary peak at ~0.6 for the remaining 20%. Interestingly, the main peak turns out to be in agreement with that expected from simulations of cluster dynamical evolution, and the ratio between these two radii correlates well with an empirical dynamical-age indicator recently defined from the observed shape of the blue straggler star radial distribution, thus suggesting that no exotic mechanisms of energy generation are needed in the cores of the analyzed clusters.
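
    For illustration, a King (1962) surface-density profile can be fitted to a star-count density profile with three parameters: a normalization k, a core radius rc and a tidal radius rt. The data below are synthetic, and rt is only weakly constrained because the toy profile stops inside the tidal radius.

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, k, rc, rt):
    """King (1962) profile; density truncated to zero beyond rt."""
    term = 1.0 / np.sqrt(1.0 + (r / rc) ** 2) - 1.0 / np.sqrt(1.0 + (rt / rc) ** 2)
    return k * np.clip(term, 0.0, None) ** 2

rng = np.random.default_rng(9)
r = np.logspace(-1.0, 1.4, 25)                     # radii (arcmin)
counts = king_profile(r, 200.0, 0.9, 30.0)
counts *= 1.0 + 0.1 * rng.standard_normal(r.size)  # toy star-count densities

popt, _ = curve_fit(king_profile, r, counts, p0=(150.0, 1.0, 40.0))
print("k, rc, rt =", np.round(popt, 2))
```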

  5. Empirical molecular-dynamics study of diffusion in liquid semiconductors

    NASA Astrophysics Data System (ADS)

    Yu, W.; Wang, Z. Q.; Stroud, D.

    1996-11-01

    We report the results of an extensive molecular-dynamics study of diffusion in liquid Si and Ge (l-Si and l-Ge) and of impurities in l-Ge, using empirical Stillinger-Weber (SW) potentials with several choices of parameters. We use a numerical algorithm in which the three-body part of the SW potential is decomposed into products of two-body potentials, thereby permitting the study of large systems. One choice of SW parameters agrees very well with the observed l-Ge structure factors. The diffusion coefficients D(T) at melting are found to be approximately 6.4×10⁻⁵ cm²/s for l-Si, in good agreement with previous calculations, and about 4.2×10⁻⁵ and 4.6×10⁻⁵ cm²/s for two models of l-Ge. In all cases, D(T) can be fitted to an activated temperature dependence, with activation energies Ed of about 0.42 eV for l-Si, and 0.32 or 0.26 eV for two models of l-Ge, as calculated from either the Einstein relation or from a Green-Kubo-type integration of the velocity autocorrelation function. D(T) for Si impurities in l-Ge is found to be very similar to the self-diffusion coefficient of l-Ge. We briefly discuss possible reasons why the SW potentials give D(T)'s substantially lower than ab initio predictions.
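
    The Einstein-relation route to D(T) mentioned here amounts to extracting the long-time slope of the mean-square displacement, MSD(t) ≈ 6Dt in three dimensions. The sketch below applies it to toy random-walk trajectories standing in for molecular-dynamics output; units and step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
n_steps, n_atoms, dt = 2000, 200, 1e-3          # dt in ps (illustrative)

# toy 3D trajectories (nm): cumulative random displacements per atom
steps = 0.01 * rng.standard_normal((n_steps, n_atoms, 3))
traj = np.cumsum(steps, axis=0)

# Einstein relation: MSD(t) -> 6 D t at long times
disp = traj - traj[0]
msd = (disp ** 2).sum(axis=-1).mean(axis=-1)    # average over atoms
t = dt * np.arange(n_steps)
slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]
print("D ~", round(slope / 6.0, 4), "nm^2/ps")
```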

  6. Optical-model potential for electron and positron elastic scattering by atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvat, Francesc

    2003-07-01

    An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkur approximation and the Lindhard dielectric function to describe the binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from ~100 eV up to ~5 keV. At higher energies, correlation-polarization and absorption corrections are small and the usual static-exchange approximation is sufficiently accurate for most practical purposes.

  7. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computational difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well in practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  8. The detailed balance requirement and general empirical formalisms for continuum absorption

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.

    1994-01-01

    Two general empirical formalisms are presented for the spectral density which take into account the deviations from the Lorentz line shape in the wing regions of resonance lines. These formalisms satisfy the detailed balance requirement. Empirical line shape functions, which are essential to provide the continuum absorption at different temperatures in various frequency regions for atmospheric transmission codes, can be obtained by fitting to experimental data.

  9. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  10. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made in improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or negative binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice among transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, weighted regression, and maximum likelihood. To complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
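
    The method-of-moments estimator evaluated in the paper is simple to state: for negative binomial counts with Var(X) = mu + alpha*mu^2, alpha_hat = (s^2 - xbar)/xbar^2. The simulation sketch below (not the paper's exact protocol) shows how it degrades at low means and small samples.

        # Sketch: method-of-moments dispersion estimates for Poisson-gamma counts.
        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_nb(mu, alpha, n):
            """Poisson-gamma mixture: lambda ~ Gamma(1/alpha, scale=alpha*mu)."""
            lam = rng.gamma(shape=1.0/alpha, scale=alpha*mu, size=n)
            return rng.poisson(lam)

        def mom_dispersion(x):
            xbar, s2 = x.mean(), x.var(ddof=1)
            return (s2 - xbar) / xbar**2            # can go negative at low means

        for mu, n in [(0.5, 50), (0.5, 1000), (5.0, 1000)]:
            est = [mom_dispersion(simulate_nb(mu, 0.5, n)) for _ in range(2000)]
            print(f"mu={mu}, n={n}: mean alpha_hat = {np.mean(est):.2f} (true 0.5)")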

  11. The Study of ( n, d) Reaction Cross Sections for New Evaluated Semi-Empirical Formula Using Optical Model

    NASA Astrophysics Data System (ADS)

    Bölükdemir, M. H.; Tel, E.; Okuducu, Ş.; Aydın, A.

    2009-12-01

    Nuclear fusion can be one of the most attractive sources of energy from the viewpoint of safety and minimal environmental impact. Neutron scattering cross-section data are of critical importance for fusion (and fusion-fission hybrid) reactors. The study of the systematics of (n, d) and similar reaction cross sections is therefore of great importance for defining the character of the excitation function for reactions taking place on various nuclei at energies up to 20 MeV. In this study, non-elastic cross sections have been calculated using the optical model for (n, d) reactions at 14-15 MeV. The excitation function character and the reaction Q-values, depending on the asymmetry-term effect for the (n, d) reaction, have been investigated. New coefficients have been obtained, and semi-empirical formulas including optical-model non-elastic effects, obtained by fitting two parameters for the (n, d) reaction cross sections, have been suggested. The cross-section formulas with the new coefficients have been compared with the available experimental data and discussed.

  12. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period); the cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
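
    The conventional three-parameter model referred to above is S(TI) = A - B*exp(-TI/T1*), with the Look-Locker correction T1 = T1*(B/A - 1). The sketch below fits that model to synthetic, polarity-restored (signed) data; the IG algorithm itself, which adds one parameter per inversion grouping, is not reproduced here.

        # Sketch: conventional three-parameter MOLLI fit with Look-Locker correction.
        import numpy as np
        from scipy.optimize import curve_fit

        def molli3(TI, A, B, T1star):
            return A - B * np.exp(-TI / T1star)

        TI = np.array([100., 180., 260., 1100., 1180., 2100., 2180., 3100., 4100.])  # ms
        rng = np.random.default_rng(4)
        signal = molli3(TI, 1.0, 1.9, 800.0) + rng.normal(0, 0.01, TI.size)

        (A, B, T1star), _ = curve_fit(molli3, TI, signal, p0=(1.0, 2.0, 1000.0))
        T1 = T1star * (B / A - 1.0)   # Look-Locker correction
        print(f"apparent T1* = {T1star:.0f} ms, corrected T1 = {T1:.0f} ms")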

  13. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using the Structural Equations Modeling Analysis method. The analysis shows that the model fits the data well based upon test statistics and goodness of fit indices. The…

  14. Relatedness, conflict, and the evolution of eusociality.

    PubMed

    Liao, Xiaoyun; Rong, Stephen; Queller, David C

    2015-03-01

    The evolution of sterile worker castes in eusocial insects was a major problem in evolutionary theory until Hamilton developed a method called inclusive fitness. He used it to show that sterile castes could evolve via kin selection, in which a gene for altruistic sterility is favored when the altruism sufficiently benefits relatives carrying the gene. Inclusive fitness theory is well supported empirically and has been applied to many other areas, but a recent paper argued that the general method of inclusive fitness was wrong and advocated an alternative population genetic method. The claim of these authors was bolstered by a new model of the evolution of eusociality with novel conclusions that appeared to overturn some major results from inclusive fitness. Here we report an expanded examination of this kind of model for the evolution of eusociality and show that all three of its apparently novel conclusions are essentially false. Contrary to their claims, genetic relatedness is important and causal, workers are agents that can evolve to be in conflict with the queen, and eusociality is not so difficult to evolve. The misleading conclusions all resulted not from incorrect math but from overgeneralizing from narrow assumptions or parameter values. For example, all of their models implicitly assumed high relatedness, but modifying the model to allow lower relatedness shows that relatedness is essential and causal in the evolution of eusociality. Their modeling strategy, properly applied, actually confirms major insights of inclusive fitness studies of kin selection. This broad agreement of different models shows that social evolution theory, rather than being in turmoil, is supported by multiple theoretical approaches. It also suggests that extensive prior work using inclusive fitness, from microbial interactions to human evolution, should be considered robust unless shown otherwise.

  15. Effect on Gaseous Film Cooling of Coolant Injection Through Angled Slots and Normal Holes

    NASA Technical Reports Server (NTRS)

    Papell, S. Stephen

    1960-01-01

    A study was made to determine the effect of coolant injection angularity on gaseous film-cooling effectiveness. In the correlation of experimental data an effective injection angle was defined by a vector summation of the coolant and mainstream gas flows. The cosine of this angle was used as a parameter to empirically develop a corrective term to qualify a correlating equation presented in Technical Note D-130 that was limited to tangential injection of the coolant. Data were also obtained for coolant injection through rows of holes normal to the test plate. The slot correlating equation was adapted to fit these data by the definition of an effective slot height. An additional corrective term was then determined to correlate these data.

  16. MOLECULAR DYNAMICS OF CASCADES OVERLAP IN TUNGSTEN WITH 20-KEV PRIMARY KNOCK-ON ATOMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.

    2015-04-16

    Molecular dynamics simulations are performed to investigate the mutual influence of two subsequent cascades in tungsten. The influence is studied using 20-keV primary knock-on atoms, inducing one cascade after another separated by 15 ps, at a lattice temperature of 1025 K (i.e. 0.25 of the melting temperature of the interatomic potential). The center of mass of the vacancies at peak damage during the cascade is taken as the location of the cascade. The distance between this location and that of the next cascade is taken as the overlap parameter. Empirical fits describing the number of surviving vacancies and interstitial atoms as a function of overlap are presented.

  17. Word diffusion and climate science.

    PubMed

    Bentley, R Alexander; Garnett, Philip; O'Brien, Michael J; Brock, William A

    2012-01-01

    As public and political debates often demonstrate, a substantial disjoint can exist between the findings of science and the impact it has on the public. Using climate-change science as a case example, we reconsider the role of scientists in the information-dissemination process, our hypothesis being that important keywords used in climate science follow "boom and bust" fashion cycles in public usage. Representing this public usage through extraordinary new data on word frequencies in books published up to the year 2008, we show that a classic two-parameter social-diffusion model closely fits the comings and goings of many keywords over generational or longer time scales. We suggest that the fashions of word usage contribute an empirical, possibly regular, correlate to the impact of climate science on society.
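
    As a hedged illustration of fitting a classic two-parameter diffusion model, the sketch below uses the Bass (1969) model, whose rate curve rises, peaks, and declines like the boom-and-bust keyword trajectories described above; the authors' exact model may differ, and the data here are synthetic.

        # Sketch: fitting the two-parameter Bass diffusion model to a usage curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def bass_rate(t, p, q):
            """Bass adoption-rate curve: innovation p, imitation q."""
            e = np.exp(-(p + q) * t)
            return (p + q)**2 / p * e / (1.0 + (q / p) * e)**2

        t = np.arange(0, 60, dtype=float)                 # years since first usage
        freq = bass_rate(t, 0.01, 0.25)
        freq *= np.random.default_rng(5).normal(1.0, 0.05, t.size)

        (p, q), _ = curve_fit(bass_rate, t, freq, p0=(0.02, 0.2))
        print(f"p = {p:.3f}, q = {q:.3f}, peak at t = {np.log(q/p)/(p+q):.1f} years")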

  18. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
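
    A generic way to see why the tail behavior matters near expiration is to price a call by integrating the payoff over two terminal return densities with equal variance, one Gaussian and one double-exponential (Laplace). This is only a conceptual stand-in for the paper's exponential model; drift and discounting are omitted.

        # Sketch: call prices under Gaussian vs Laplace log-return densities.
        import numpy as np

        def call_price(S0, K, density, x):
            """Expected payoff over log-returns x (drift/discounting omitted)."""
            dx = x[1] - x[0]
            payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
            return np.sum(payoff * density) * dx

        x = np.linspace(-1.5, 1.5, 4001)
        sigma = 0.2
        gauss = np.exp(-x**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))
        b = sigma / np.sqrt(2.0)                 # Laplace scale with equal variance
        laplace = np.exp(-np.abs(x)/b) / (2*b)

        for K in (90.0, 100.0, 110.0):
            print(f"K={K:5.1f}  gaussian={call_price(100, K, gauss, x):6.2f}  "
                  f"laplace={call_price(100, K, laplace, x):6.2f}")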

  19. Consistency with synchrotron emission in the bright GRB 160625B observed by Fermi

    NASA Astrophysics Data System (ADS)

    Ravasio, M. E.; Oganesyan, G.; Ghirlanda, G.; Nava, L.; Ghisellini, G.; Pescalli, A.; Celotti, A.

    2018-05-01

    We present time-resolved spectral analysis of the prompt emission from GRB 160625B, one of the brightest bursts ever detected by Fermi in its nine years of operations. Standard empirical functions fail to provide an acceptable fit to the GBM spectral data, which instead require the addition of a low-energy break to the fitting function. We introduce a new fitting function, called 2SBPL, consisting of three smoothly connected power laws. Fitting this model to the data, the goodness of the fits significantly improves and the spectral parameters are well constrained. We also test a spectral model that combines non-thermal and thermal (black body) components, but find that the 2SBPL model is systematically favoured. The spectral evolution shows that the spectral break is located around E_break ~ 100 keV, while the usual νFν peak energy feature E_peak evolves in the 0.5-6 MeV energy range. The slopes below and above E_break are consistent with the values -0.67 and -1.5, respectively, expected from synchrotron emission produced by a relativistic electron population with a low-energy cut-off. If E_break is interpreted as the synchrotron cooling frequency, the implied magnetic field in the emitting region is ~10 Gauss, i.e. orders of magnitude smaller than the value expected for a dissipation region located at 10^13-10^14 cm from the central engine. The low ratio between E_peak and E_break implies that the radiative cooling is incomplete, contrary to what is expected in strongly magnetized and compact emitting regions.
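
    The idea of smoothly connected power laws can be sketched with a single-break version; the published 2SBPL chains two such breaks (at E_break and E_peak), and its exact parameterization is not reproduced here.

        # Sketch: a smoothly broken power law, N(E) ~ E^-g1 below Eb, E^-g2 above.
        import numpy as np

        def sbpl(E, A, Eb, g1, g2, n=2.0):
            """n controls the sharpness of the break."""
            t = E / Eb
            return A * (t**(n * g1) + t**(n * g2))**(-1.0 / n)

        E = np.logspace(1, 4, 200)                     # keV
        N = sbpl(E, 1.0, 100.0, 0.67, 1.5)             # synchrotron-like slopes

        # Verify the asymptotic slopes numerically:
        lo = np.polyfit(np.log(E[:20]), np.log(N[:20]), 1)[0]
        hi = np.polyfit(np.log(E[-20:]), np.log(N[-20:]), 1)[0]
        print(f"low-energy slope ~ {lo:.2f}, high-energy slope ~ {hi:.2f}")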

  20. A General Population Genetic Framework for Antagonistic Selection That Accounts for Demography and Recurrent Mutation

    PubMed Central

    Connallon, Tim; Clark, Andrew G.

    2012-01-01

    Antagonistic selection—where alleles at a locus have opposing effects on male and female fitness (“sexual antagonism”) or between components of fitness (“antagonistic pleiotropy”)—might play an important role in maintaining population genetic variation and in driving phylogenetic and genomic patterns of sexual dimorphism and life-history evolution. While prior theory has thoroughly characterized the conditions necessary for antagonistic balancing selection to operate, we currently know little about the evolutionary interactions between antagonistic selection, recurrent mutation, and genetic drift, which should collectively shape empirical patterns of genetic variation. To fill this void, we developed and analyzed a series of population genetic models that simultaneously incorporate these processes. Our models identify two general properties of antagonistically selected loci. First, antagonistic selection inflates heterozygosity and fitness variance across a broad parameter range—a result that applies to alleles maintained by balancing selection and by recurrent mutation. Second, effective population size and genetic drift profoundly affect the statistical frequency distributions of antagonistically selected alleles. The “efficacy” of antagonistic selection (i.e., its tendency to dominate over genetic drift) is extremely weak relative to classical models, such as directional selection and overdominance. Alleles meeting traditional criteria for strong selection (Nes >> 1, where Ne is the effective population size, and s is a selection coefficient for a given sex or fitness component) may nevertheless evolve as if neutral. The effects of mutation and demography may generate population differences in overall levels of antagonistic fitness variation, as well as molecular population genetic signatures of balancing selection. PMID:22298707

  1. A general population genetic framework for antagonistic selection that accounts for demography and recurrent mutation.

    PubMed

    Connallon, Tim; Clark, Andrew G

    2012-04-01

    Antagonistic selection--where alleles at a locus have opposing effects on male and female fitness ("sexual antagonism") or between components of fitness ("antagonistic pleiotropy")--might play an important role in maintaining population genetic variation and in driving phylogenetic and genomic patterns of sexual dimorphism and life-history evolution. While prior theory has thoroughly characterized the conditions necessary for antagonistic balancing selection to operate, we currently know little about the evolutionary interactions between antagonistic selection, recurrent mutation, and genetic drift, which should collectively shape empirical patterns of genetic variation. To fill this void, we developed and analyzed a series of population genetic models that simultaneously incorporate these processes. Our models identify two general properties of antagonistically selected loci. First, antagonistic selection inflates heterozygosity and fitness variance across a broad parameter range--a result that applies to alleles maintained by balancing selection and by recurrent mutation. Second, effective population size and genetic drift profoundly affect the statistical frequency distributions of antagonistically selected alleles. The "efficacy" of antagonistic selection (i.e., its tendency to dominate over genetic drift) is extremely weak relative to classical models, such as directional selection and overdominance. Alleles meeting traditional criteria for strong selection (N(e)s >> 1, where N(e) is the effective population size, and s is a selection coefficient for a given sex or fitness component) may nevertheless evolve as if neutral. The effects of mutation and demography may generate population differences in overall levels of antagonistic fitness variation, as well as molecular population genetic signatures of balancing selection.

  2. Using data to inform soil microbial carbon model structure and parameters

    NASA Astrophysics Data System (ADS)

    Hagerty, S. B.; Schimel, J.

    2016-12-01

    There is increasing consensus that explicitly representing microbial mechanisms in soil carbon models can improve model predictions of future soil carbon stocks. However, which microbial mechanisms must be represented in these new models, and how, remains under debate. One of the major challenges in developing microbially explicit soil carbon models is that there is little data available to validate model structure. Empirical studies of microbial mechanisms often fail to capture the full range of microbial processes, from the cellular processes that occur within minutes to hours of substrate consumption to community turnover, which may occur over weeks or longer. We added isotopically labeled 14C-glucose to soil incubated in the lab and traced its movement into the microbial biomass, carbon dioxide, and the K2SO4-extractable carbon pool. We measured the concentration of 14C in each of these pools at 1, 3, 6, 24, and 72 hours and at 7, 14, and 21 days. We used these data to compare fits among models that match our conceptual understanding of microbial carbon transformations and to estimate the microbial parameters that control the fate of soil carbon. Over 90% of the added glucose was consumed within the first hour after it was added, and the concentration of the label was highest in biomass at this time. After the first hour, the label in biomass declined, with the rate of loss from the biomass slowing after 24 hours; because of this, models representing the microbial biomass as two pools fit best. Recovery of the label decreased with incubation time, from nearly 80% in the first hour to 67% after three weeks, indicating that carbon moves into unextractable pools in the soil, likely as microbial products and necromass sorb to soil particles, and that these mechanisms must be represented in microbial models. This data-fitting exercise demonstrates how isotopic data can be useful for validating model structure and estimating microbial model parameters. Future studies can apply this inverse modeling approach to compare the response of microbial parameters to changes in environmental conditions.
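
    A conceptual stand-in for the two-pool result described above: the label remaining in biomass is modeled as the sum of a fast and a slow exponential pool and fitted to (synthetic) observations at the paper's sampling times. The pool structure and rate constants here are assumptions for illustration.

        # Sketch: two-pool first-order decline of 14C label in microbial biomass.
        import numpy as np
        from scipy.optimize import curve_fit

        def two_pool(t, f, k_fast, k_slow):
            return f * np.exp(-k_fast * t) + (1.0 - f) * np.exp(-k_slow * t)

        t = np.array([1., 3., 6., 24., 72., 168., 336., 504.])   # hours
        obs = two_pool(t, 0.6, 0.15, 0.002)
        obs *= np.random.default_rng(6).normal(1.0, 0.03, t.size)

        (f, k_fast, k_slow), _ = curve_fit(two_pool, t, obs, p0=(0.5, 0.1, 0.01),
                                           bounds=(0, [1.0, 5.0, 1.0]))
        print(f"fast fraction {f:.2f}; turnover ~{1/k_fast:.0f} h and ~{1/k_slow:.0f} h")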

  3. Experimental study of water desorption isotherms and thin-layer convective drying kinetics of bay laurel leaves

    NASA Astrophysics Data System (ADS)

    Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed

    2016-12-01

    The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperature levels (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit to the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45% and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R^2 and χ^2 were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
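
    The winning model above has the Midilli-Kucuk form MR = a*exp(-k*t^n) + b*t, which is straightforward to fit with Levenberg-Marquardt (scipy's default for unconstrained least squares); the drying curve below is synthetic.

        # Sketch: Levenberg-Marquardt fit of the Midilli-Kucuk thin-layer model.
        import numpy as np
        from scipy.optimize import curve_fit

        def midilli(t, a, k, n, b):
            return a * np.exp(-k * t**n) + b * t

        t = np.linspace(0.01, 6.0, 40)                    # drying time, hours
        mr = midilli(t, 1.0, 0.45, 1.1, -0.004)           # moisture ratio
        mr += np.random.default_rng(7).normal(0, 0.005, t.size)

        popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.5, 1.0, 0.0), method="lm")
        resid = mr - midilli(t, *popt)
        r2 = 1.0 - resid.var() / mr.var()
        print(f"a,k,n,b = {np.round(popt, 3)};  R^2 = {r2:.4f}")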

  4. A comment on priors for Bayesian occupancy models

    PubMed Central

    Gerber, Brian D.

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are “uninformative” or “vague”, such priors can easily be unintentionally highly informative. Here we report on how the specification of a “vague” normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts. PMID:29481554
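
    The core point is easy to demonstrate: a Normal(0, sd) prior on a logit-scale intercept with a large sd places most of its mass near occupancy probabilities of 0 or 1. The sd values below are illustrative.

        # Sketch: the induced prior on occupancy probability under "vague"
        # Normal priors on the logit scale.
        import numpy as np

        rng = np.random.default_rng(8)
        for sd in (10.0, 1.6):
            beta0 = rng.normal(0.0, sd, 100_000)      # prior draws, logit scale
            psi = 1.0 / (1.0 + np.exp(-beta0))        # induced prior on probability
            extreme = np.mean((psi < 0.05) | (psi > 0.95))
            print(f"sd = {sd}: P(psi < 0.05 or psi > 0.95) = {extreme:.2f}")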

  5. An empirical approach to the stopping power of solids and gases for ions from 3Li to 18Ar

    NASA Astrophysics Data System (ADS)

    Paul, Helmut; Schinner, Andreas

    2001-08-01

    A large collection of stopping power data for projectiles from 3Li to 18Ar is investigated as a possible basis for producing a table of stopping powers. We divide the experimental stopping powers for a particular projectile (nuclear charge Z1) by those for alpha particles in the same element, as given in ICRU Report 49. With proper normalization, we then obtain experimental stopping power ratios S_rel that lie approximately on a single curve, provided we treat solid and gaseous targets separately, and provided we exclude H2 and He targets. For every projectile, this curve is then fitted by a 3-parameter sigmoid function S_rel = S_rel(a, b, c). We find that the three parameters a, b and c depend smoothly on Z1 and can themselves be fitted by suitable functions a_f, b_f and c_f of Z1, separately for solid and gaseous targets. The low energy limit (coefficient a) for solids agrees approximately with the prediction by Lindhard and Scharff. We find that a_gas < a_sol in almost all cases. Introducing the coefficients a_f, b_f and c_f in S_rel, we can calculate the stopping power for any ion (3 ≤ Z1 ≤ 18), and for any element (except H2 and He) and any mixture or compound contained in the ICRU table.
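
    The fitting step can be sketched with a generic three-parameter sigmoid in log energy; the authors' exact functional form is not reproduced here, and the data are synthetic.

        # Sketch: 3-parameter sigmoid fit to normalized stopping-power ratios.
        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(logE, a, b, c):
            """Low-energy limit a, high-energy limit 1, midpoint c, width b."""
            return a + (1.0 - a) / (1.0 + np.exp(-(logE - c) / b))

        logE = np.linspace(-2, 2, 25)              # log10(specific energy), synthetic
        srel = sigmoid(logE, 0.35, 0.5, 0.0)
        srel *= np.random.default_rng(9).normal(1.0, 0.02, logE.size)

        (a, b, c), _ = curve_fit(sigmoid, logE, srel, p0=(0.3, 0.4, 0.1))
        print(f"a = {a:.2f} (low-energy limit), b = {b:.2f}, c = {c:.2f}")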

  6. VizieR Online Data Catalog: Vela Junior (RX J0852.0-4622) HESS image (HESS+, 2018)

    NASA Astrophysics Data System (ADS)

    H. E. S. S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Andersson, T.; Anguener, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernloehr, K.; Blackwell, R.; Boettcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Buechele, M.; Bulik, T.; Capasso, M.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Chretien, M.; Coffaro, M.; Colafrancesco, S.; Cologna, G.; Condon, B.; Conrad, J.; Cui, Y.; Davids, I. D.; Decock, J.; Degrange, B.; Deil, C.; Devin, J.; Dewilt, P.; Dirson, L.; Djannati-Atai, A.; Domainko, W.; Donath, A.; Drury, L. O'c.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Foerster, A.; Funk, S.; Fuessling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jogler, T.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzynski, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khelifi, B.; Kieffer, M.; King, J.; Klepser, S.; Klochkov, D.; Kluzniak, W.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krueger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lefranc, V.; Lemiere, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; Lopez-Coto, R.; Lypova, I.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Mora, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Niederwanger, F.; Niemiec J.; Oakes, L.; O'Brien, P.; Odaka, H.; Oettl, S.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poon, H.; Prokhorov, D.; Prokoph, H.; Puehlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reimer, A.; Reimer, O.; Renaud, M.; de Los Reyes, R.; Richter, S.; Rieger, F.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schuessler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, L.; Steenkamp, R.; Stegmann, C.; Stycz, K.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der, Walt D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Voelk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Woernlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. 
A.; Zech, A.; Zefi, F.; Ziegler, A.; Zywucka, N.

    2018-03-01

    skymap.fit: H.E.S.S. excess skymap in FITS format of the region comprising Vela Junior and its surroundings. The excess map has been corrected for the gradient of exposure and smoothed with a Gaussian function of width 0.08° to match the analysis point spread function, matching the procedure applied to derive the maps in Fig. 1. sp_stat.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent statistical uncertainties at the 1 sigma confidence level. The covariance matrix of the fit is also included, in the format:
        c_11 c_12 c_13
        c_21 c_22 c_23
        c_31 c_32 c_33
    where the subindices represent the following parameters of the power-law with exponential cut-off (ECPL) formula in Tab. 2:
        1: flux normalization (Phi0)
        2: spectral index (Gamma)
        3: inverse of the cutoff energy (lambda = 1/Ecut)
    The units for the covariance matrix are the same as for the fit parameters. Notice that, while the fit parameters section of the file shows E_cut as a parameter, the fit was done in lambda = 1/Ecut; hence the covariance matrix shows the values for lambda in TeV^-1. sp_syst.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent systematic uncertainties at the 1 sigma confidence level. The integral fluxes for several energy ranges are also included. (4 data files).

  7. Empirical evidence that metabolic theory describes the temperature dependency of within-host parasite dynamics.

    PubMed

    Kirk, Devin; Jones, Natalie; Peacock, Stephanie; Phillips, Jessica; Molnár, Péter K; Krkošek, Martin; Luijckx, Pepijn

    2018-02-01

    The complexity of host-parasite interactions makes it difficult to predict how host-parasite systems will respond to climate change. In particular, host and parasite traits such as survival and virulence may have distinct temperature dependencies that must be integrated into models of disease dynamics. Using experimental data from Daphnia magna and a microsporidian parasite, we fitted a mechanistic model of the within-host parasite population dynamics. Model parameters comprising host aging and mortality, as well as parasite growth, virulence, and equilibrium abundance, were specified by relationships arising from the metabolic theory of ecology. The model effectively predicts host survival, parasite growth, and the cost of infection across temperature while using less than half the parameters compared to modeling temperatures discretely. Our results serve as a proof of concept that linking simple metabolic models with a mechanistic host-parasite framework can be used to predict temperature responses of parasite population dynamics at the within-host level.

  8. BIOB: a mathematical model for the biodegradation of low solubility hydrocarbons.

    PubMed

    Geng, Xiaolong; Boufadel, Michel C; Personna, Yves R; Lee, Ken; Tsao, David; Demicco, Erik D

    2014-06-15

    Modeling oil biodegradation is an important step in predicting the long-term fate of oil on beaches. Unfortunately, existing models account for environmental factors that affect oil biodegradation, such as pore-water nutrient concentration, only empirically rather than mechanistically. We present herein a numerical model, BIOB, to simulate the biodegradation of insoluble attached hydrocarbons. The model was used to simulate an experimental oil spill on a sand beach. The biodegradation kinetic parameters were estimated by fitting the model to the experimental data for alkanes and aromatics. It was found that the parameter values are comparable to their counterparts for the biodegradation of dissolved organic matter. The biodegradation of aromatics was highly affected by the decay of the aromatic-degrading biomass, probably due to its low growth rate. Numerical simulations revealed that the biodegradation rate increases 3-4 fold when the nutrient concentration is increased from 0.2 to 2.0 mg N/L. Published by Elsevier Ltd.
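
    The nutrient effect described above can be caricatured with a dual-Monod growth law, in which the specific growth rate is limited jointly by substrate and nutrient. This is a minimal conceptual analogue with invented constants, not the published BIOB model.

        # Sketch: Monod-type biodegradation of substrate S by biomass X under
        # nutrient (N) limitation.
        from scipy.integrate import solve_ivp

        mu_max, Ks, Kn = 0.5, 10.0, 0.1   # 1/day, mg/L, mg N/L (illustrative)
        Y, Yn, kd = 0.3, 0.05, 0.05       # yields and biomass decay rate

        def rhs(t, y):
            S, N, X = y
            mu = mu_max * S / (Ks + S) * N / (Kn + N)   # dual Monod limitation
            return [-mu * X / Y, -mu * X * Yn / Y, (mu - kd) * X]

        for N0 in (0.2, 2.0):             # pore-water nutrient levels, mg N/L
            sol = solve_ivp(rhs, (0, 60), [100.0, N0, 1.0])
            print(f"N0 = {N0}: {100.0 - sol.y[0, -1]:.0f}% of substrate degraded in 60 d")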

  9. Empirical evidence that metabolic theory describes the temperature dependency of within-host parasite dynamics

    PubMed Central

    Jones, Natalie; Peacock, Stephanie; Phillips, Jessica; Molnár, Péter K.; Krkošek, Martin; Luijckx, Pepijn

    2018-01-01

    The complexity of host–parasite interactions makes it difficult to predict how host–parasite systems will respond to climate change. In particular, host and parasite traits such as survival and virulence may have distinct temperature dependencies that must be integrated into models of disease dynamics. Using experimental data from Daphnia magna and a microsporidian parasite, we fitted a mechanistic model of the within-host parasite population dynamics. Model parameters comprising host aging and mortality, as well as parasite growth, virulence, and equilibrium abundance, were specified by relationships arising from the metabolic theory of ecology. The model effectively predicts host survival, parasite growth, and the cost of infection across temperature while using less than half the parameters compared to modeling temperatures discretely. Our results serve as a proof of concept that linking simple metabolic models with a mechanistic host–parasite framework can be used to predict temperature responses of parasite population dynamics at the within-host level. PMID:29415043

  10. The impulsive hard X-rays from solar flares

    NASA Technical Reports Server (NTRS)

    Leach, J.

    1984-01-01

    A technique for determining the physical arrangement of a solar flare during the impulsive phase was developed, based upon a nonthermal model interpretation of the emitted hard X-rays. Accurate values are obtained for the flare parameters, including those that describe the magnetic field structure and the beaming of the energetic electrons, parameters which have hitherto been mostly inaccessible. The X-ray intensity height structure can be described readily with a single expression based upon a semi-empirical fit to the results from many models. Results show that the degree of linear polarization of the X-rays from a flaring loop does not exceed 25 percent and can easily and naturally be as low as the polarization expected from a thermal model. This is a highly significant result in that it supersedes those based upon less thorough calculations of the electron beam dynamics and requires a reevaluation of hopes of using polarization measurements to discriminate between categories of flare models.

  11. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
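
    A moment-based version of this idea fits a Gamma prior from the marginal mean and variance of the counts and shrinks each unit's estimate toward the prior mean; it is a simple stand-in for the paper's smooth estimator.

        # Sketch: parametric empirical Bayes for Poisson rates with a Gamma prior.
        import numpy as np

        rng = np.random.default_rng(10)
        lam_true = rng.gamma(shape=4.0, scale=0.5, size=200)   # unit hazard rates
        x = rng.poisson(lam_true)                              # one count per unit

        m, v = x.mean(), x.var(ddof=1)
        var_lam = max(v - m, 1e-6)            # marginal variance = m + Var(lambda)
        beta = m / var_lam                    # Gamma(alpha, rate=beta) prior
        alpha = m * beta

        lam_eb = (x + alpha) / (1.0 + beta)   # posterior-mean shrinkage estimator
        print(f"MSE, MLE: {np.mean((x - lam_true)**2):.3f}; "
              f"empirical Bayes: {np.mean((lam_eb - lam_true)**2):.3f}")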

  12. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    NASA Astrophysics Data System (ADS)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry across time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances; the result showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry with the proposed model. The experimental results again showed good agreement between observed and estimated values, and the noise-like fluctuations in the time-series reflectance data were reduced after the sun-target-sensor normalization.
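
    Kernel-driven BRDF models are linear in their coefficients, R = f_iso + f_vol*K_vol + f_geo*K_geo, so the fit reduces to ordinary least squares once the kernels are evaluated at each observation's geometry. The kernel values below are random placeholders standing in for, e.g., Ross-Li kernels; they are not computed from real geometries.

        # Sketch: least-squares fit of a kernel-driven BRDF model and
        # normalization to a standard sun-target-sensor geometry.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 40                                      # observations in the time series
        K_vol = rng.uniform(-0.1, 0.6, n)           # placeholder volumetric kernel
        K_geo = rng.uniform(-1.5, 0.0, n)           # placeholder geometric kernel
        refl = 0.25 + 0.15*K_vol + 0.05*K_geo + rng.normal(0, 0.005, n)

        A = np.column_stack([np.ones(n), K_vol, K_geo])
        (f_iso, f_vol, f_geo), *_ = np.linalg.lstsq(A, refl, rcond=None)

        K_vol0, K_geo0 = 0.1, -0.7                  # kernels at the reference geometry
        print(f"normalized reflectance = {f_iso + f_vol*K_vol0 + f_geo*K_geo0:.3f}")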

  13. Tests of Fit for Asymmetric Laplace Distributions with Applications on Financial Data

    NASA Astrophysics Data System (ADS)

    Fragiadakis, Kostas; Meintanis, Simos G.

    2008-11-01

    New goodness-of-fit tests for the family of asymmetric Laplace distributions are constructed. The proposed tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data, and can be written in a closed form appropriate for computer implementation. Monte Carlo results show that the new procedures are competitive with classical goodness-of-fit methods. Applications to financial data are also included.

  14. Stillinger-Weber potential for elastic and fracture properties in graphene and carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Hossain, M. Z.; Hao, T.; Silverman, B.

    2018-02-01

    This paper presents a new framework for determining the Stillinger-Weber (SW) potential parameters for modeling fracture in graphene and carbon nanotubes. In addition to fitting the equilibrium material properties, the approach allows fitting the potential to the forcing behavior as well as the mechanical strength of the solid, without requiring ad hoc modification of the nearest-neighbor interactions to avoid artificial stiffening of the lattice at larger deformation. Consistent with first-principles results, the potential shows the Young's modulus of graphene to be isotropic under symmetry-preserving and symmetry-breaking deformation conditions. It also shows the Young's modulus of carbon nanotubes to be diameter-dependent under symmetry-breaking loading conditions. The potential addresses the key deficiency of existing empirical potentials in reproducing the experimentally observed glass-like brittle fracture in graphene and carbon nanotubes. In simulating the entire deformation process leading to fracture, the SW potential requires several times less computational time than state-of-the-art interatomic potentials, enabling exploration of fracture processes in large atomistic systems that are otherwise inaccessible.

  15. Analysis and implications of mutational variation.

    PubMed

    Keightley, Peter D; Halligan, Daniel L

    2009-06-01

    Variation from new mutations is important for several questions in quantitative genetics. Key parameters are the genomic mutation rate and the distribution of effects of mutations (DEM), which determine the amount of new quantitative variation that arises per generation from mutation (V(M)). Here, we review methods and empirical results concerning mutation accumulation (MA) experiments that have shed light on properties of mutations affecting quantitative traits. Surprisingly, most data on fitness traits from laboratory assays of MA lines indicate that the DEM is platykurtic in form (i.e., substantially less leptokurtic than an exponential distribution), and imply that most variation is produced by mutations of moderate to large effect. This finding contrasts with results from MA or mutagenesis experiments in which mutational changes to the DNA can be assayed directly, which imply that the vast majority of mutations have very small phenotypic effects, and that the distribution has a leptokurtic form. We compare these findings with recent approaches that attempt to infer the DEM for fitness based on comparing the frequency spectra of segregating nucleotide polymorphisms at putatively neutral and selected sites in population samples. When applied to data for humans and Drosophila, these analyses also indicate that the DEM is strongly leptokurtic. However, by combining the resultant estimates of parameters of the DEM with estimates of the mutation rate per nucleotide, the predicted V(M) for fitness is only a tiny fraction of V(M) observed in MA experiments. This discrepancy can be explained if we postulate that a few deleterious mutations of large effect contribute most of the mutational variation observed in MA experiments and that such mutations segregate at very low frequencies in natural populations, and effectively are never seen in population samples.

  16. Determining Empirical Stellar Masses and Radii from Transits and Gaia Parallaxes as Illustrated by Spitzer Observations of KELT-11b

    NASA Astrophysics Data System (ADS)

    Beatty, Thomas G.; Stevens, Daniel J.; Collins, Karen A.; Colón, Knicole D.; James, David J.; Kreidberg, Laura; Pepper, Joshua; Rodriguez, Joseph E.; Siverd, Robert J.; Stassun, Keivan G.; Kielkopf, John F.

    2017-07-01

    Using the Spitzer Space Telescope, we observed a transit at 3.6 μm of KELT-11b. We also observed three partial planetary transits from the ground. We simultaneously fit these observations, ground-based photometry from Pepper et al., radial velocity data from Pepper et al., and a spectral energy distribution (SED) model using catalog magnitudes and the Hipparcos parallax to the system. The only significant difference between our results and those of Pepper et al. is that we find the orbital period to be shorter by 37 s, 4.73610 ± 0.00003 versus 4.73653 ± 0.00006 days, and we measure a transit center time of BJD_TDB = 2457483.4310 ± 0.0007, which is 42 minutes earlier than predicted. Using our new photometry, we precisely measure the density of the star KELT-11 to 4%. By combining the parallax and catalog magnitudes of the system, we are able to measure the radius of KELT-11b essentially empirically. Coupled with the stellar density, this gives a parallactic mass and radius of 1.8 M⊙ and 2.9 R⊙, which are each approximately 1σ higher than the adopted model-estimated mass and radius. If we conduct the same fit using the expected parallax uncertainty from the final Gaia data release, this difference increases to 4σ. The differences between the model and parallactic masses and radii for KELT-11 demonstrate the role that precise Gaia parallaxes, coupled with simultaneous photometric, radial velocity, and SED fitting, can play in determining stellar and planetary parameters. With high-precision photometry of transiting planets and high-precision Gaia parallaxes, the parallactic mass and radius uncertainties of stars become ~1% and ~3%, respectively. TESS is expected to discover 60-80 systems where these measurements will be possible. These parallactic mass and radius measurements have uncertainties small enough that they may provide observational input into the stellar models themselves.
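
    The "essentially empirical" mass follows from simple arithmetic: the transit light curve yields the stellar density and the parallax plus SED yield the radius, so M = (4*pi/3)*rho*R^3. The density below is chosen to reproduce the abstract's 1.8 M⊙ at 2.9 R⊙ and is illustrative only.

        # Sketch: stellar mass from transit density plus parallactic radius.
        import math

        R_sun = 6.957e10   # cm
        M_sun = 1.989e33   # g

        R = 2.9 * R_sun    # parallactic radius from SED + parallax
        rho = 0.104        # g/cm^3, from the transit light curve (illustrative)
        M = (4.0 / 3.0) * math.pi * rho * R**3
        print(f"parallactic mass = {M / M_sun:.1f} Msun")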

  17. Hygroscopicity measurements of aerosol particles in the San Joaquin Valley, CA, Baltimore, MD, and Golden, CO

    NASA Astrophysics Data System (ADS)

    Orozco, Daniel; Beyersdorf, A. J.; Ziemba, L. D.; Berkoff, T.; Zhang, Q.; Delgado, R.; Hennigan, C. J.; Thornhill, K. L.; Young, D. E.; Parworth, C.; Kim, H.; Hoff, R. M.

    2016-06-01

    Aerosol hygroscopicity was investigated using a novel dryer-humidifier system, coupled to a TSI-3563 nephelometer, to obtain the light scattering coefficient (σscat) as a function of relative humidity (RH) in hydration and dehydration modes. The measurements were performed in Porterville, CA (10 January to 6 February 2013), Baltimore, MD (3-30 July 2013), and Golden, CO (12 July to 10 August 2014). Observations in Porterville and Golden were part of the NASA-sponsored Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality project. The measured σscat under varying RH at the three sites was combined with ground aerosol extinction, PM2.5 mass concentrations, and particle composition measurements and compared with airborne observations performed during the campaigns. The enhancement factor f(RH), defined as the ratio of σscat(RH) at a given RH to σscat at a dry reference value, was used to evaluate aerosol hygroscopicity. Particles in Porterville showed a low average f(RH = 80%) of 1.42, attributed to the high carbonaceous loading in the region, where residential biomass burning and traffic emissions contribute heavily to air pollution. In Baltimore, the high average f(RH = 80%) of 2.06 was attributed to the large contribution of SO4^2- in the region. The lowest water uptake was observed in Golden, with an average f(RH = 80%) of 1.24, where organic carbon dominated the particle loading. Different empirical fits were evaluated using the f(RH) data. The widely used Kasten (gamma) model was found least satisfactory, as it overestimates f(RH) for RH < 75%. A better empirical fit with two power-law curve-fitting parameters, c and k, was found to replicate f(RH) accurately at all three sites. The relationship between the organic carbon mass, the species affected by RH, and f(RH) was also studied and categorized.
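
    One common two-parameter power-law form for the enhancement factor is f(RH) = c*(1 - RH/100)^(-k); whether this matches the study's exact c, k parameterization is an assumption, and the points below are synthetic.

        # Sketch: fitting a two-parameter power law to scattering enhancement data.
        import numpy as np
        from scipy.optimize import curve_fit

        def f_rh(RH, c, k):
            return c * (1.0 - RH/100.0)**(-k)

        RH = np.array([30., 40., 50., 60., 70., 80., 85.])
        f_obs = f_rh(RH, 0.9, 0.45)
        f_obs *= np.random.default_rng(12).normal(1.0, 0.02, RH.size)

        (c, k), _ = curve_fit(f_rh, RH, f_obs, p0=(1.0, 0.3))
        print(f"c = {c:.2f}, k = {k:.2f}, f(RH=80%) = {f_rh(80.0, c, k):.2f}")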

  18. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.

  19. Bayesian History Matching of Complex Infectious Disease Models Using Emulation: A Tutorial and a Case Study on HIV in Uganda

    PubMed Central

    Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.

    2015-01-01

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22-input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was many orders of magnitude smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
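
    At the heart of history matching is the implausibility measure, which compares the emulator's prediction at a candidate input with the observation, inflated by emulator, observation, and model-discrepancy variances; inputs whose implausibility exceeds a cutoff (commonly ~3) are discarded. The numbers below are hypothetical.

        # Sketch: the implausibility measure used to discard simulator inputs.
        import numpy as np

        def implausibility(em_mean, em_var, z, obs_var, disc_var):
            """I(x) = |z - E[f(x)]| / sqrt(Var_em + Var_obs + Var_disc)."""
            return np.abs(z - em_mean) / np.sqrt(em_var + obs_var + disc_var)

        # Hypothetical emulator output at 5 candidate inputs for one target output
        em_mean = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
        em_var = np.array([0.002, 0.001, 0.003, 0.002, 0.004])
        z, obs_var, disc_var = 0.26, 0.001, 0.001

        I = implausibility(em_mean, em_var, z, obs_var, disc_var)
        print("non-implausible inputs:", np.where(I < 3.0)[0])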

  20. The ACTIVE conceptual framework as a structural equation model

    PubMed Central

    Gross, Alden L.; Payne, Brennan R.; Casanova, Ramon; Davoudzadeh, Pega; Dzierzewski, Joseph M.; Farias, Sarah; Giovannetti, Tania; Ip, Edward H.; Marsiske, Michael; Rebok, George W.; Schaie, K. Warner; Thomas, Kelsey; Willis, Sherry; Jones, Richard N.

    2018-01-01

    Background/Study Context: Conceptual frameworks are analytic models at a high level of abstraction. Their operationalization can inform randomized trial design and sample size considerations. Methods: The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) conceptual framework was empirically tested using structural equation modeling (N=2,802). ACTIVE was guided by a conceptual framework for cognitive training in which proximal cognitive abilities (memory, inductive reasoning, speed of processing) mediate treatment-related improvement in primary outcomes (everyday problem-solving, difficulty with activities of daily living, everyday speed, driving difficulty), which in turn lead to improved secondary outcomes (health-related quality of life, health service utilization, mobility). Measurement models for each proximal, primary, and secondary outcome were developed and tested using baseline data. Each construct was then combined in one model to evaluate fit (RMSEA, CFI, normalized residuals of each indicator). To expand the conceptual model and potentially inform future trials, evidence of modification of structural model parameters was evaluated by age, years of education, sex, race, and self-rated health status. Results: Preconceived measurement models for memory, reasoning, speed of processing, everyday problem-solving, instrumental activities of daily living (IADL) difficulty, everyday speed, driving difficulty, and health-related quality of life each fit well to the data (all RMSEA < .05; all CFI > .95). Fit of the full model was excellent (RMSEA = .038; CFI = .924). In contrast with previous findings from ACTIVE regarding who benefits from training, interaction testing revealed that associations between proximal abilities and primary outcomes were stronger, on average, for participants of nonwhite race, in worse health, of older age, and with less education (p < .005). Conclusions: Empirical data confirm the hypothesized ACTIVE conceptual model. Findings suggest that the types of people who show intervention effects on cognitive performance potentially may be different from those with the greatest chance of transfer to real-world activities. PMID:29303475
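
    For readers unfamiliar with the fit indices quoted above, the sketch below computes RMSEA and CFI from model and baseline chi-square statistics using their standard definitions; the input numbers are invented placeholders, not values from the ACTIVE analysis.

```python
# Standard definitions of RMSEA and CFI from chi-square statistics.
# All numbers below are illustrative placeholders.
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index relative to a baseline (null) model."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

chi2_model, df_model, n = 350.0, 70, 2802     # hypothetical model fit
chi2_base, df_base = 5000.0, 90               # hypothetical baseline model
print(rmsea(chi2_model, df_model, n))         # < .05 indicates close fit
print(cfi(chi2_model, df_model, chi2_base, df_base))  # > .95 is good
```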

  1. Leaf and stem economics spectra drive diversity of functional plant traits in a dynamic global vegetation model.

    PubMed

    Sakschewski, Boris; von Bloh, Werner; Boit, Alice; Rammig, Anja; Kattge, Jens; Poorter, Lourens; Peñuelas, Josep; Thonicke, Kirsten

    2015-01-22

    Functional diversity is critical for ecosystem dynamics, stability and productivity. However, dynamic global vegetation models (DGVMs), which are increasingly used to simulate ecosystem functions under global change, condense functional diversity to plant functional types (PFTs) with constant parameters. Here, we develop an individual- and trait-based version of the DGVM LPJmL (Lund-Potsdam-Jena managed Land), called LPJmL-FIT (LPJmL with flexible individual traits), which we apply to generate plant trait maps for the Amazon basin. LPJmL-FIT incorporates empirical ranges of five traits of tropical trees extracted from the TRY global plant trait database, namely specific leaf area (SLA), leaf longevity (LL), leaf nitrogen content per area (Narea), the maximum carboxylation rate of Rubisco per leaf area (Vcmax,area), and wood density (WD). To scale the individual growth performance of trees, the leaf traits are linked by trade-offs based on the leaf economics spectrum, whereas wood density is linked to tree mortality. No preselection of growth strategies takes place, because individuals with unique trait combinations are uniformly distributed at tree establishment. We validate the modeled trait distributions against empirical trait data and the modeled biomass against a remote sensing product along a climatic gradient. Including trait variability and trade-offs successfully predicts natural trait distributions and achieves a more realistic representation of functional diversity at the local to regional scale. As sites of high climatic variability, the fringes of the Amazon promote trait divergence and the coexistence of multiple tree growth strategies, while lower plant trait diversity is found in the species-rich center of the region, where climatic variability is relatively low. LPJmL-FIT makes it possible to test hypotheses on the effects of functional biodiversity on ecosystem functioning and to apply the DGVM to current challenges in ecosystem management from local to global scales, such as deforestation and climate change effects. © 2015 John Wiley & Sons Ltd.

  2. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through an L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore trace, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashpoint). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
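
    The rewriting mechanism is easy to miniaturize. The sketch below expands a two-symbol stochastic L-system whose symbols stand for packet-arrival rates; the production rules, probabilities, and rates are invented for illustration and are not the fitted rules from the paper.

```python
# Minimal stochastic L-system: each symbol rewrites to one of several
# strings chosen by probability; symbols map to packet-arrival rates.
import random

rules = {
    "L": [("LH", 0.6), ("LL", 0.4)],   # L = low arrival rate (invented)
    "H": [("HL", 0.5), ("HH", 0.5)],   # H = high arrival rate (invented)
}
rates = {"L": 100.0, "H": 1000.0}      # packets/s per symbol (invented)

def expand(axiom, iterations, rng=random.Random(42)):
    s = axiom
    for _ in range(iterations):
        out = []
        for sym in s:
            words, probs = zip(*rules[sym])
            out.append(rng.choices(words, weights=probs)[0])
        s = "".join(out)
    return s

string = expand("L", 8)                 # 2^8 symbols after 8 rewrites
arrival_rates = [rates[c] for c in string]
print(string[:32], len(arrival_rates))
```

    Iterated rewriting of this kind produces multiplicative, cascade-like structure in the rate sequence, which is what gives the model its multifractal character.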

  3. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate into the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. On Detecting Influential Data and Selecting Regression Variables

    DTIC Science & Technology

    1989-10-01

    subset of the data. The empirical influence function for ,, IFA is defined to be IFA = AA -- A (2) For a given positive definite matrix M and a nonzero...interest. Cook and Weisberg (1980) tried to treat their measurement of the influence on the fitted values X. They used the empirical influence function for...Characterizations of an empirical influence function for detecting influential cases in regression. Technometrics 22, 495-508. [3] Gray, J. B. and Ling, R. F

  5. Making a georeferenced mosaic of historical map series using constrained polynomial fit

    NASA Astrophysics Data System (ADS)

    Molnár, G.

    2009-04-01

    Present day GIS software packages make it possible to handle several hundreds of rasterised map sheets. For proper usage of such datasets we usually have two requirements: first, the map sheets should be georeferenced; secondly, the georeferenced maps should fit together properly, without overlaps or gaps. Both requirements can be fulfilled easily if the geodetic background for the map series is accurate and the projection of the map series is known. In this case the individual map sheets should be georeferenced in the projected coordinate system of the map series, i.e., every map sheet is georeferenced using the overprinted coordinate grid or the projected coordinates of the sheet corners as ground control points (GCPs). If after this georeferencing procedure the map sheets do not fit together (for example because a different projection was used for every map sheet, as is the case for the Third Military Survey), a common projection can be chosen and all the georeferenced maps transformed to it using a map-to-map transformation. If the geodetic background is not so strong, i.e., there are distortions inside the map sheets, a polynomial (linear, quadratic or cubic) fit can be used for georeferencing the map sheets. Finding identical surface objects (as GCPs) on the historical map and on a present-day cartographic map allows us to determine a transformation between raw image coordinates (x, y) and the projected coordinates (Easting, Northing; E, N). This means that for every map sheet several GCPs must be found (for linear, quadratic or cubic transformations at least 3, 6 or 10, respectively) and every map sheet transformed to a present-day coordinate system individually using these GCPs. The disadvantage of this method is that, after the transformation, the individual transformed map sheets no longer necessarily fit together properly. Reversing the order of the procedure does not help either: if we make the mosaic first (e.g. graphically) and apply the polynomial fit to this mosaic afterwards, we still cannot reduce the error caused by the internal inaccuracy of the map sheets. We can overcome this problem by calculating the transformation parameters of the polynomial fit with constraints (Mikhail, 1976). The constraint is that the common edge of two neighbouring map sheets should transform identically, i.e., the right edge of the left image and the left edge of the right image should fit together after the transformation. This condition must hold for all internal (not only vertical, but also horizontal) edges of the mosaic. Constraints are expressed as relationships between parameters: the parameters of the polynomial transformation must satisfy not only the least-squares adjustment criteria but also the constraint that the transformed coordinates are identical along shared image edges. (In the example above, for image points in the rightmost column of the left image the transformed coordinates must be the same as for the image points in the leftmost column of the right image, and these transformed coordinates may depend on the line-number image coordinate of the raster point.) The normal equation system can be solved with Lagrange multipliers. The resulting set of parameters for all map sheets should then be applied to transform the images. This parameter set cannot be applied directly in GIS software, so the simplest solution is "simulating" GCPs for every image and using these simulated GCPs for the georeferencing of the individual map sheets. This method was applied to a set of map sheets of the First Military Survey of the Habsburg Empire with acceptable results. Reference: Mikhail, E. M.: Observations and Least Squares. IEP—A Dun-Donnelley Publisher, New York, 1976. 497 pp.
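
    The constrained adjustment described above can be sketched compactly as an equality-constrained least-squares problem solved through the Karush-Kuhn-Tucker (KKT) normal equations, which is the Lagrange-multiplier formulation in matrix form. The two one-dimensional "sheets", their GCPs, and the shared-edge constraint below are invented for illustration, not real survey data.

```python
# Equality-constrained least squares via Lagrange multipliers:
# minimize ||A p - b||^2 subject to C p = d, solved from the KKT system.
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||A p - b||^2 subject to C p = d via the KKT system."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]      # drop the Lagrange multipliers

# Two 1-D affine "sheets" with parameters p = (a1, b1, a2, b2), where
# sheet i maps local coordinate x to E_i(x) = a_i + b_i * x.
A = np.array([[1.0, 0.2, 0.0, 0.0],         # two GCPs on sheet 1
              [1.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.3],         # two GCPs on sheet 2
              [0.0, 0.0, 1.0, 1.9]])
b = np.array([10.2, 10.8, 11.35, 11.85])    # slightly inconsistent targets

# Constraint: both sheets map the shared edge x = 1 to the same Easting,
# i.e. a1 + b1 = a2 + b2.
C = np.array([[1.0, 1.0, -1.0, -1.0]])
d = np.array([0.0])

p = constrained_lsq(A, b, C, d)
print(p, "edge values:", p[0] + p[1], p[2] + p[3])   # edges now agree
```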

  6. The effects of time-varying observation errors on semi-empirical sea-level projections

    DOE PAGES

    Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...

    2016-11-30

    Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
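
    The methodological contrast at the heart of the study can be illustrated in miniature: fit a linear trend to synthetic observations whose error standard deviation shrinks over time, once ignoring and once modelling the heteroskedasticity. The data and numbers below are invented, not the sea-level record used in the paper.

```python
# Ordinary vs. weighted least squares under heteroskedastic errors.
# Synthetic "sea level" data; older observations are noisier.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1880, 2014)
sigma = np.linspace(3.0, 0.5, t.size)         # error SD shrinks over time
y = 0.17 * (t - t[0]) + rng.normal(0, sigma)  # anomaly in cm (invented)

X = np.column_stack([np.ones_like(t), t - t[0]])

# OLS: assumes constant error variance
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# WLS: weights each observation by the inverse of its error SD
W = 1.0 / sigma
beta_wls = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]

print("OLS trend:", beta_ols[1])              # cm/yr
print("WLS trend:", beta_wls[1])              # uses the error structure
```

    A full Bayesian treatment additionally propagates the residual autocorrelation into the posterior, which is where the differences in the upper-tail projections arise.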

  7. The effects of time-varying observation errors on semi-empirical sea-level projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.

    Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.

  8. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
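
    A minimal version of the standard (uncorrected) fit is sketched below: a cumulative-Gaussian psychometric function estimated by maximum likelihood with a Nelder-Mead search. The simulated stimuli and responses are placeholders, and the bias-reduction step described in the paper is not implemented here.

```python
# Maximum-likelihood fit of a cumulative-Gaussian psychometric function.
# Simulated data; the spread is fitted on a log scale to stay positive.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.uniform(-4, 4, 300)                  # stimulus levels
mu_true, sigma_true = 0.5, 1.5               # bias point and spread
resp = rng.random(300) < norm.cdf((x - mu_true) / sigma_true)

def neg_log_lik(params):
    mu, log_sigma = params
    p = norm.cdf((x - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)           # guard against log(0)
    return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```

    With adaptively sampled (staircase) data, this plain fit is exactly where the spread-parameter bias appears; the paper's contribution is a modified likelihood that shrinks that bias.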

  9. A method for cone fitting based on certain sampling strategy in CMM metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Guo, Chaopeng

    2018-04-01

    A method of cone fitting for engineering applications is explored and implemented to overcome shortcomings of the current fitting method, in which imprecise calculation of the initial geometric parameters causes poor surface-fitting accuracy. A geometric distance function of the cone is constructed first, then a specific sampling strategy is defined to calculate the initial geometric parameters, and finally the nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method obtains initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
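
    A sketch of the general approach, assuming the standard signed point-to-cone-surface distance as the residual: the cone is parameterized by an apex, an axis direction given by two angles, and a half-angle, and the parameters are refined with scipy's nonlinear least squares. The synthetic data and initial guess below are illustrative, not the paper's sampling strategy.

```python
# Cone fitting by nonlinear least squares on a geometric distance.
# For a point with axial coordinate h and radial coordinate r relative
# to the apex/axis, the signed distance to the surface of a cone with
# half-angle alpha is r*cos(alpha) - h*sin(alpha).
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    ax, ay, az, tx, ty, alpha = params       # apex, axis angles, half-angle
    apex = np.array([ax, ay, az])
    d = np.array([np.sin(tx) * np.cos(ty),   # unit axis from two angles
                  np.sin(tx) * np.sin(ty),
                  np.cos(tx)])
    u = pts - apex
    h = u @ d                                # axial coordinates
    r = np.linalg.norm(u - np.outer(h, d), axis=1)  # radial coordinates
    return r * np.cos(alpha) - h * np.sin(alpha)

# Synthetic points on a cone (apex at origin, axis +z, half-angle 20 deg)
rng = np.random.default_rng(3)
h = rng.uniform(1, 5, 400)
phi = rng.uniform(0, 2 * np.pi, 400)
r = h * np.tan(np.radians(20))
pts = np.column_stack([r * np.cos(phi), r * np.sin(phi), h])
pts += rng.normal(0, 0.01, pts.shape)

x0 = [0.1, -0.1, 0.2, 0.05, 0.0, np.radians(25)]   # rough initial guess
fit = least_squares(cone_residuals, x0, args=(pts,))
print(np.degrees(fit.x[-1]))                 # recovered half-angle
```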

  10. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
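
    The flavor of such a metamodeling can be conveyed with the usual second-order density expansion of the energy per nucleon around saturation, written in terms of the empirical parameters named above. The parameter values below are typical literature magnitudes inserted as placeholders, not the paper's fitted values.

```python
# Second-order expansion of the energy per nucleon around saturation:
# e(n, delta) = Esat + Ksat/2 * x^2 + delta^2 (Esym + Lsym x + Ksym/2 x^2),
# with x = (n - nsat) / (3 nsat) and delta the isospin asymmetry.
import numpy as np

NSAT = 0.155                                 # saturation density, fm^-3

def energy_per_nucleon(n, delta,
                       Esat=-15.8, Ksat=230.0,          # isoscalar (MeV)
                       Esym=32.0, Lsym=60.0, Ksym=-100.0):  # isovector (MeV)
    x = (n - NSAT) / (3.0 * NSAT)
    e_is = Esat + 0.5 * Ksat * x**2              # symmetric matter
    e_iv = Esym + Lsym * x + 0.5 * Ksym * x**2   # symmetry energy
    return e_is + delta**2 * e_iv                # delta = (n_n - n_p)/n

n = np.linspace(0.05, 0.30, 6)
print(energy_per_nucleon(n, delta=0.0))      # symmetric nuclear matter
print(energy_per_nucleon(n, delta=1.0))      # pure neutron matter
```

    The full metamodel carries the expansion to higher order and adds a low-density correction, so that varying the empirical parameters within their uncertainties spans the range of existing functionals.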

  11. Empirical Green's function analysis: Taking the next step

    USGS Publications Warehouse

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and the attenuation parameter) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and attenuation if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
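
    The spectral model underlying this kind of analysis can be sketched as an omega-squared (Brune) source spectrum multiplied by a path-attenuation term. The fit below uses synthetic data, with t* standing in for the attenuation parameter; it also illustrates why corner frequency and attenuation trade off when only a single spectrum is fitted, which is the ambiguity the multiple-event approach is designed to break.

```python
# Fit of an omega-squared source spectrum with attenuation:
# Omega(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi f t*).  Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, t_star):
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

f = np.logspace(-0.5, 1.5, 200)              # ~0.3 to ~30 Hz
true = brune_spectrum(f, omega0=1e-3, fc=4.0, t_star=0.03)
rng = np.random.default_rng(11)
obs = true * rng.lognormal(0.0, 0.1, f.size) # multiplicative noise

popt, _ = curve_fit(brune_spectrum, f, obs,
                    p0=[1e-3, 2.0, 0.01],
                    bounds=([0, 0.1, 0], [np.inf, 50, 1]))
omega0, fc, t_star = popt
print(fc, t_star)    # fc and t* trade off in a single-spectrum fit
```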

  12. Broadband Ground Motion Synthesis of the 1999 Turkey Earthquakes Based On: 3-D Velocity Inversion, Finite Difference Calculations and Empirical Green's Functions

    NASA Astrophysics Data System (ADS)

    Gok, R.; Kalafat, D.; Hutchings, L.

    2003-12-01

    We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed the simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara Region using SIMULPS14, along with 2,500 events with more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara than in the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGFs) for the higher frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.

  13. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    ERIC Educational Resources Information Center

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

    This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the proposition of Technology Acceptance…

  14. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
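
    The idea of fitting a closed-form firing-rate expression directly to f-I data can be sketched with the plain leaky integrate-and-fire (LIF) neuron, whose f-I curve has a textbook closed form; this is a stand-in for the AdEx approximation used in the paper, and all values below are invented.

```python
# Fitting the closed-form LIF f-I curve to noisy firing-rate data.
# For constant input I with steady-state voltage v_inf = R*I (resting
# potential and reset taken as 0), the interspike interval is
# t_ref + tau * ln(v_inf / (v_inf - v_th)) whenever v_inf > v_th.
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, R, tau, v_th, t_ref):
    v_inf = R * I
    rate = np.zeros_like(I)
    ok = v_inf > v_th                          # below threshold -> 0 Hz
    isi = t_ref + tau * np.log(v_inf[ok] / (v_inf[ok] - v_th))
    rate[ok] = 1.0 / isi
    return rate

I = np.linspace(0.0, 1.0, 40)                  # nA (illustrative units)
true = lif_rate(I, R=60.0, tau=0.02, v_th=15.0, t_ref=0.002)
rng = np.random.default_rng(6)
obs = true + rng.normal(0, 1.0, I.size)        # noisy "in vitro" rates

popt, _ = curve_fit(lif_rate, I, obs, p0=[50.0, 0.01, 10.0, 0.002],
                    bounds=(0, [200, 0.1, 50, 0.01]))
print(popt)                                    # R, tau, v_th, t_ref
```

    Because the rate is evaluated analytically rather than by integrating the membrane equation, each cost-function evaluation is cheap, which is what makes this class of fits orders of magnitude faster.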

  15. Testing the Goodwin growth-cycle macroeconomic dynamics in Brazil

    NASA Astrophysics Data System (ADS)

    Moura, N. J.; Ribeiro, Marcelo B.

    2013-05-01

    This paper discusses the empirical validity of Goodwin’s (1967) macroeconomic model of growth with cycles by assuming that the individual income distribution of the Brazilian society is described by the Gompertz-Pareto distribution (GPD). This is formed by the combination of the Gompertz curve, representing the overwhelming majority of the population (~99%), with the Pareto power law, representing the tiny richest part (~1%). In line with Goodwin’s original model, we identify the Gompertzian part with the workers and the Paretian component with the class of capitalists. Since the GPD parameters are obtained for each year and the Goodwin macroeconomics is a time evolving model, we use previously determined, and further extended here, Brazilian GPD parameters, as well as unemployment data, to study the time evolution of these quantities in Brazil from 1981 to 2009 by means of the Goodwin dynamics. This is done in the original Goodwin model and an extension advanced by Desai et al. (2006). As far as Brazilian data are concerned, our results show partial qualitative and quantitative agreement with both models in the studied time period, although the original one provides a better data fit. Nevertheless, both models fall short of a good empirical agreement as they predict cycles around a single center, which were not found in the data. We discuss the specific points where the Goodwin dynamics must be improved in order to provide a more realistic representation of the dynamics of economic systems.

  16. The effect of seasonal birth pulses on pathogen persistence in wild mammal populations.

    PubMed

    Peel, A J; Pulliam, J R C; Luis, A D; Plowright, R K; O'Shea, T J; Hayman, D T S; Wood, J L N; Webb, C T; Restif, O

    2014-07-07

    The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effect on pathogen persistence and CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters being constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction.
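
    The core experiment translates into a few lines of code: a discrete-time stochastic SIR model with a periodic, Gaussian-like birth pulse whose synchrony parameter s controls how concentrated births are within the year. The pulse shape and all parameter values below are illustrative, not the fitted values from the paper.

```python
# Stochastic SIR with a seasonal birth pulse (crude daily binomial scheme).
# Larger s = tighter pulse; tighter pulses tend to drive extinction.
import numpy as np

rng = np.random.default_rng(5)
beta, gamma, mu = 0.4, 1.0 / 14, 1.0 / (2 * 365)   # per-day rates (invented)
s = 10.0                                           # birth-pulse synchrony

# Periodic Gaussian-like birth pulse, normalized to mean 1 over a year
days = np.arange(365)
pulse = np.exp(-s * np.cos(np.pi * days / 365.0) ** 2)
pulse /= pulse.mean()

S, I, R = 4000, 50, 1000
extinct_at = None
for day in range(20 * 365):
    N = S + I + R
    births = rng.poisson(mu * N * pulse[day % 365])
    inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))   # new infections
    rec = rng.binomial(I, 1.0 - np.exp(-gamma))          # recoveries
    dS = rng.binomial(S - inf, 1.0 - np.exp(-mu))        # deaths
    dI = rng.binomial(I - rec, 1.0 - np.exp(-mu))
    dR = rng.binomial(R, 1.0 - np.exp(-mu))
    S, I, R = S + births - inf - dS, I + inf - rec - dI, R + rec - dR
    if I == 0:
        extinct_at = day / 365.0
        break
print("extinction year:", extinct_at)   # None means the pathogen persisted
```

    Repeating such runs across population sizes and pulse synchronies is, in essence, how a CCS is estimated: the smallest population in which most realizations persist.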

  17. The effect of seasonal birth pulses on pathogen persistence in wild mammal populations

    PubMed Central

    Peel, A. J.; Pulliam, J. R. C.; Luis, A. D.; Plowright, R. K.; O'Shea, T. J.; Hayman, D. T. S.; Wood, J. L. N.; Webb, C. T.; Restif, O.

    2014-01-01

    The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effect on pathogen persistence and CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters being constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction. PMID:24827436

  18. Spectral analysis of early-type stars using a genetic algorithm based fitting method

    NASA Astrophysics Data System (ADS)

    Mokiem, M. R.; de Koter, A.; Puls, J.; Herrero, A.; Najarro, F.; Villamariz, M. R.

    2005-10-01

    We present the first automated fitting method for the quantitative spectroscopy of O- and early B-type stars with stellar winds. The method combines the non-LTE stellar atmosphere code fastwind from Puls et al. (2005, A&A, 435, 669) with the genetic algorithm based optimization routine pikaia from Charbonneau (1995, ApJS, 101, 309), allowing for a homogeneous analysis of upcoming large samples of early-type stars (e.g. Evans et al. 2005, A&A, 437, 467). In this first implementation we use continuum normalized optical hydrogen and helium lines to determine photospheric and wind parameters. We have assigned weights to these lines accounting for line blends with species not taken into account, lacking physics, and/or possible or potential problems in the model atmosphere code. We find the method to be robust, fast, and accurate. Using our method we analysed seven O-type stars in the young cluster Cyg OB2 and five other Galactic stars with high rotational velocities and/or low mass loss rates (including 10 Lac, ζ Oph, and τ Sco) that have been studied in detail with a previous version of fastwind. The fits are found to have a quality that is comparable to or even better than that produced by the classical “by eye” method. We define error bars on the model parameters based on the maximum variations of these parameters in the models that cluster around the global optimum. Using this concept, for the investigated dataset we are able to recover mass-loss rates down to ~6 × 10^-8 M⊙ yr^-1 to within an error of a factor of two, ignoring possible systematic errors due to uncertainties in the continuum normalization. Comparison of our derived spectroscopic masses with those derived from stellar evolutionary models shows very good agreement, i.e. based on the limited sample that we have studied we do not find indications for a mass discrepancy. For three stars we find significantly higher surface gravities than previously reported. We identify this to be due to differences in the weighting of Balmer line wings between our automated method and “by eye” fitting and/or an improved multidimensional optimization of the parameters. The empirical modified wind momentum relation constructed on the basis of the stars analysed here agrees to within the error bars with the theoretical relation predicted by Vink et al. (2000, A&A, 362, 295), including those cases for which the winds are weak (i.e. less than a few times 10^-7 M⊙ yr^-1).

  19. Identifying a Superfluid Reynolds Number via Dynamical Similarity.

    PubMed

    Reeves, M T; Billam, T P; Anderson, B P; Bradley, A S

    2015-04-17

    The Reynolds number provides a characterization of the transition to turbulent flow, with wide application in classical fluid dynamics. Identifying such a parameter in superfluid systems is challenging due to their fundamentally inviscid nature. Performing a systematic study of superfluid cylinder wakes in two dimensions, we observe dynamical similarity of the frequency of vortex shedding by a cylindrical obstacle. The universality of the turbulent wake dynamics is revealed by expressing shedding frequencies in terms of an appropriately defined superfluid Reynolds number, Re(s), that accounts for the breakdown of superfluid flow through quantum vortex shedding. For large obstacles, the dimensionless shedding frequency exhibits a universal form that is well-fitted by a classical empirical relation. In this regime the transition to turbulence occurs at Re(s)≈0.7, irrespective of obstacle width.

  20. Measurement of positive direct current corona pulse in coaxial wire-cylinder gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Han, E-mail: hanyin1986@gmail.com; Zhang, Bo, E-mail: shizbcn@mail.tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn

    In this paper, a system is designed and developed to measure the positive corona current in coaxial wire-cylinder gaps. The characteristic parameters of corona current pulses, such as the amplitude, rise time, half-wave time, and repetition frequency, are statistically analyzed and a new set of empirical formulas are derived by numerical fitting. The influence of space charges on corona currents is tested by using three corona cages with different radii. A numerical method is used to solve a simplified ion-flow model to explain the influence of space charges. Based on the statistical results, a stochastic model is developed to simulate the corona pulse trains. This model is verified by comparing the simulated frequency-domain responses with the measured ones.

  1. Linear free-energy relationships and the density functional theory: an analog of the hammett equation.

    PubMed

    Simón-Manso, Yamil

    2005-03-10

    Density functional theory has been applied to describe electronic substituent effects, especially in the pursuit of linear relationships similar to those observed in physical organic chemistry experiments. In particular, analogues for the Hammett equation parameters (sigma, rho) have been developed. Theoretical calculations were performed on several series of organic molecules in order to validate our model and for comparison with experimental results. The trends obtained by Hammett-like relations predicted by the model were found to be in qualitative agreement with the experimental data. The results obtained in this study suggest that correlation analyses based on theoretical methodologies that do not rely on empirical fits to experimental data can be useful in the study of substituent effects in organic chemistry.

  2. Fitting membrane resistance along with action potential shape in cardiac myocytes improves convergence: application of a multi-objective parallel genetic algorithm.

    PubMed

    Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J

    2014-01-01

    Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behavior is a time consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the case of a single cell, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm, at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order of magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue level behavior, and should be incorporated.
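
    The identifiability problem motivating the extra objective can be shown with a toy evolutionary loop, sketched below under strong simplifying assumptions: the "AP" trace constrains only the product of two parameters, while an auxiliary observable (the stand-in for Rm) resolves their ratio. Neither the model nor the search is the paper's cardiac setup.

```python
# Toy illustration of why a second objective helps identifiability.
import numpy as np

rng = np.random.default_rng(12)
t = np.linspace(0.0, 1.0, 200)

def model(p):
    a, b = p
    trace = np.exp(-(a * b) * t)        # "AP shape": depends on a*b only
    aux = a / b                         # "Rm"-like reading: fixes the ratio
    return trace, aux

target_trace, target_aux = model((3.0, 2.0))

def cost(p, w_aux):
    trace, aux = model(p)
    mse = np.mean((trace - target_trace) ** 2)
    return mse + w_aux * (aux - target_aux) ** 2

def evolve(w_aux, gens=80, pop=40):
    P = rng.uniform(0.1, 10.0, (pop, 2))
    for _ in range(gens):
        scores = np.array([cost(p, w_aux) for p in P])
        elite = P[np.argsort(scores)[: pop // 4]]   # truncation selection
        P = np.repeat(elite, 4, axis=0)
        P += rng.normal(0.0, 0.2, P.shape)          # Gaussian mutation
        P = np.clip(P, 0.05, 12.0)
    return min(P, key=lambda p: cost(p, w_aux))

print("AP only       :", evolve(w_aux=0.0))  # any pair with a*b near 6
print("AP + auxiliary:", evolve(w_aux=1.0))  # converges near (3, 2)
```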

  3. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives: Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods: Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results: Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions: The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
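
    In the simplest setting that has closed forms, ordinary linear regression, the two diagnostics look as follows; for OLS the one-step influence formula is exact, whereas in the likelihood setting of the paper it is an approximation whose appeal is speed. The data below are simulated with one planted outlier.

```python
# Exact case deletion (ECD) vs. influence-function shortcut (EIF)
# for linear regression, using the hat matrix.
import numpy as np

rng = np.random.default_rng(9)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
y[0] += 6.0                                   # plant one outlier

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T                         # hat matrix
e = y - X @ beta                              # residuals

def ecd(i):
    """Exact case deletion: refit with case i removed."""
    mask = np.arange(n) != i
    return np.linalg.lstsq(X[mask], y[mask], rcond=None)[0] - beta

def eif(i):
    """One-step formula: -(X'X)^-1 x_i e_i / (1 - h_ii); exact for OLS."""
    return -XtX_inv @ X[i] * e[i] / (1.0 - H[i, i])

print(ecd(0))   # change in (intercept, slope) from deleting case 0
print(eif(0))   # matches, at a fraction of the cost of refitting
```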

  4. A Comparison of Seyfert 1 and 2 Host Galaxies

    NASA Astrophysics Data System (ADS)

    De Robertis, M.; Virani, S.

    2000-12-01

    Wide-field, R-band CCD data of 15 Seyfert 1 and 15 Seyfert 2 galaxies taken from the CfA survey were analysed in order to compare the properties of their host galaxies. In addition, B-band images for a subset of 12 Seyfert 1s and 7 Seyfert 2s were acquired and analysed in the same way. A robust technique for decomposing the three components (nucleus, bulge, and disk) was developed in order to determine the structural parameters for each galaxy. In effect, the nuclear contribution was removed empirically by using a spatially nearby, high signal-to-noise ratio point source as a template. Profile fits to the bulge+disk ignored data within three seeing disks of the nucleus. Of the many parameters that were compared between Seyfert 1s and 2s, only two distributions differed at greater than the 95% confidence level for the K-S test: the magnitude of the nuclear component, and the radial color gradient outside the nucleus. The former is expected. The latter could be consistent with some proposed evolutionary models. There is some suggestion that other parameters may differ, but at a lower confidence level.

  5. Characterization of the settling process for wastewater from a combined sewer system.

    PubMed

    Piro, P; Carbone, M; Penna, N; Marsalek, J

    2011-12-15

    Among the methods used for determining the parameters necessary for design of wastewater settling tanks, settling column tests are used most commonly, because of their simplicity and low costs. These tests partly mimic the actual settling processes and allow the evaluation of total suspended solids (TSS) removal by settling. Wastewater samples collected from the Liguori Channel (LC) catchment in Cosenza (Italy) were subject to settling column tests, which yielded iso-removal curves for both dry and wet-weather flow conditions. Such curves were approximated well by the newly proposed power law function containing two empirical parameters, a and b, the first of which is the particle settling velocity and the second one is a flocculation factor accounting for deviations from discrete particle settling. This power law function was tested for both the LC catchment and literature data and yielded a very good fit, with correlation coefficient values (R(2)) ranging from 0.93 to 0.99. Finally, variations in the settling tank TSS removal efficiencies with parameters a and b were also analyzed and provided insight for settling tank design. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Force-field parametrization and molecular dynamics simulations of Congo red

    NASA Astrophysics Data System (ADS)

    Król, Marcin; Borowski, Tomasz; Roterman, Irena; Piekarska, Barbara; Stopa, Barbara; Rybarska, Joanna; Konieczny, Leszek

    2004-01-01

    Congo red, a diazo dye widely used in medical diagnosis, is known to form supramolecular systems in solution. Such a supramolecular system may interact with various proteins. In order to examine the nature of such complexes, empirical force field parameters for the Congo red molecule were developed. The parametrization of bonding terms closely followed the methodology used in the development of the CHARMM22 force field, except for the calculation of charges: point charges were calculated from a fit to a quantum mechanically derived electrostatic potential using the CHELP-BOW method. The obtained parameters were tested in a series of molecular dynamics simulations of both a single molecule and a micelle composed of Congo red molecules. It is shown that the newly developed parameters define a stable minimum on the potential energy hypersurface, and that crystal and ab initio geometries and rotational barriers are well reproduced. Furthermore, rotations around C-N bonds are similar to torsional vibrations observed in crystals of diphenyl-diazene, which confirms that the flexibility of the molecule is correct. Comparison of results from micelle molecular dynamics simulations with experimental data shows that the thermal dependence of micelle formation is well reproduced.

  7. The location-, word-, and arrow-based Simon effects: An ex-Gaussian analysis.

    PubMed

    Luo, Chunming; Proctor, Robert W

    2018-04-01

    Task-irrelevant spatial information, conveyed by stimulus location, location word, or arrow direction, can influence the response to task-relevant attributes, generating the location-, word-, and arrow-based Simon effects. We examined whether different mechanisms are involved in the generation of these Simon effects by fitting a mathematical ex-Gaussian function to empirical response time (RT) distributions. Specifically, we tested which ex-Gaussian parameters (μ, σ, and τ) show Simon effects and whether the location-, word-, and arrow-based effects appear on different parameters. Results show that the location-based Simon effect occurred on mean RT and μ but not on τ, and a reverse Simon effect occurred on σ. In contrast, a positive word-based Simon effect was obtained on all these measures (including σ), and a positive arrow-based Simon effect was evident on mean RT, σ, and τ but not μ. The arrow-based Simon effect was not different from the word-based Simon effect on τ or σ but was on μ and mean RT. These distinct results on mean RT and ex-Gaussian parameters provide evidence that the spatial information conveyed by the various location modes differs in its time-course of activation.
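
    Fitting an ex-Gaussian to RT data is straightforward with scipy, whose exponnorm distribution is exactly the Gaussian-plus-exponential family with shape K = τ/σ. The RTs below are simulated rather than Simon-task data; in practice one would fit each condition separately and difference the parameters to localize the effect.

```python
# Ex-Gaussian fit to simulated reaction times via scipy's exponnorm,
# which parameterizes the ex-Gaussian with K = tau / sigma.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(4)
mu, sigma, tau = 0.45, 0.05, 0.15            # seconds (invented)
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = exponnorm.fit(rt)            # maximum-likelihood fit
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(mu_hat, sigma_hat, tau_hat)            # recovers the three parameters
```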

  8. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model

    NASA Astrophysics Data System (ADS)

    Auluck, S. K. H.

    2014-12-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.

  9. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
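
    The attenuation bias is easy to reproduce. The sketch below simulates an AR(1) process, adds white measurement noise, and estimates the autoregression naively from the lag-1 autocorrelation; all parameter values are arbitrary.

```python
# AR(1) observed with measurement noise: the naive lag-1 estimate of
# the autoregressive parameter is attenuated toward zero.
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.6, 200                            # true parameter, series length
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, 1.0)

y = x + rng.normal(0.0, 1.0, n)              # add white measurement error

def lag1(z):
    z = z - z.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

print("true phi        :", phi)
print("fit to process  :", lag1(x))          # near 0.6
print("fit to observed :", lag1(y))          # attenuated by the noise share
```

    The attenuation factor is roughly the signal variance divided by the total variance, which is why series with 30-50% measurement-error variance show such substantial underestimation.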

  10. Modeling the near-ultraviolet band of GK stars. III. Dependence on abundance pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Short, C. Ian; Campbell, Eamonn A., E-mail: ishort@ap.smu.ca

    2013-06-01

    We extend the grid of non-LTE (NLTE) models presented in Paper II to explore variations in abundance pattern in two ways: (1) the adoption of the Asplund et al. (GASS10) abundances, (2) for stars of metallicity, [M/H], of –0.5, the adoption of a non-solar enhancement of α-elements by +0.3 dex. Moreover, our grid of synthetic spectral energy distributions (SEDs) is interpolated to a finer numerical resolution in both T {sub eff} (ΔT {sub eff} = 25 K) and log g (Δlog g = 0.25). We compare the values of T {sub eff} and log g inferred from fitting LTE andmore » NLTE SEDs to observed SEDs throughout the entire visible band, and in an ad hoc 'blue' band. We compare our spectrophotometrically derived T {sub eff} values to a variety of T {sub eff} calibrations, including more empirical ones, drawn from the literature. For stars of solar metallicity, we find that the adoption of the GASS10 abundances lowers the inferred T {sub eff} value by 25-50 K for late-type giants, and NLTE models computed with the GASS10 abundances give T {sub eff} results that are marginally in better agreement with other T {sub eff} calibrations. For stars of [M/H] = –0.5 there is marginal evidence that adoption of α-enhancement further lowers the derived T {sub eff} value by 50 K. Stellar parameters inferred from fitting NLTE models to SEDs are more dependent than LTE models on the wavelength region being fitted, and we find that the effect depends on how heavily line blanketed the fitting region is, whether the fitting region is to the blue of the Wien peak of the star's SED, or both.« less

  11. Potential fitting biases resulting from grouping data into variable width bins

    NASA Astrophysics Data System (ADS)

    Towers, S.

    2014-07-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experiments have made good faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to too often accept the model hypothesis when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes need to be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.

  12. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
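
    The width difference between the two approaches shows up even in a conjugate toy problem (normal mean with unknown variance), sketched below: plugging in a point estimate of the variance, as an empirical Bayes analysis does, yields a narrower interval than integrating the variance out. This is an analogy, not the tide gauge model itself.

```python
# Empirical Bayes (plug-in variance) vs. full Bayes (variance
# integrated out under a Jeffreys prior) for a normal mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
y = rng.normal(10.0, 2.0, size=8)            # short, noisy "record"
n, ybar, s = y.size, y.mean(), y.std(ddof=1)

# Empirical Bayes: treat sigma = s as known -> normal posterior for mean
eb = stats.norm.interval(0.95, loc=ybar, scale=s / np.sqrt(n))

# Full Bayes: sigma integrated out -> Student-t posterior for the mean
fb = stats.t.interval(0.95, df=n - 1, loc=ybar, scale=s / np.sqrt(n))

print("empirical Bayes 95%:", eb)            # narrower
print("full Bayes 95%     :", fb)            # wider; honest about sigma
```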

  13. Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.

    PubMed

    Martin, Guillaume

    2014-05-01

    Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.

  14. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    NASA Astrophysics Data System (ADS)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. To make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.

  15. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square statistic. It utilizes a nonlinear optimization algorithm to calculate the best statistically weighted values of the parameters of the fitting function and the minimized chi-square. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
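
    As an illustration of this kind of routine, here is a minimal sketch of weighted chi-square minimization with a Levenberg-Marquardt optimizer; the model, data, and starting values are hypothetical, and SciPy stands in for the original FORTRAN implementation.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical fitting function and synthetic weighted data
      def model(p, x):
          a, b, c = p
          return a * np.exp(-b * x) + c

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 5.0, 40)
      sigma = 0.05 * np.ones_like(x)                  # measurement uncertainties
      y = model([2.0, 1.3, 0.4], x) + rng.normal(0.0, sigma)

      # chi^2 = sum(((y - model)/sigma)^2); least_squares minimizes sum(res^2)
      residuals = lambda p: (y - model(p, x)) / sigma
      fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], method="lm")

      chi2 = np.sum(fit.fun ** 2)
      dof = len(x) - len(fit.x)
      # Parameter covariance from the Jacobian at the minimum (goodness-of-fit info)
      cov = np.linalg.inv(fit.jac.T @ fit.jac) * chi2 / dof
      print("parameters:", fit.x)
      print("reduced chi^2:", chi2 / dof)
      print("1-sigma errors:", np.sqrt(np.diag(cov)))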

  16. Constraining Earthquake Source Parameters in Rupture Patches and Rupture Barriers on Gofar Transform Fault, East Pacific Rise from Ocean Bottom Seismic Data

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.

    2015-12-01

    On Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance is low between the data and model. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method where variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies from rupture patches and rupture barriers are not discernible. Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones, respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along-strike fault zone properties on Gofar transform fault.
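
    A minimal sketch of the spectral-fitting step (hypothetical numbers, not the study's data): an omega-squared (Brune) source model is fitted to an amplitude spectrum for the long-period plateau, which is proportional to seismic moment, and for the corner frequency. Fitting in log space weights low and high frequencies evenly.

      import numpy as np
      from scipy.optimize import curve_fit

      # Brune omega-squared source spectrum: A(f) = Omega0 / (1 + (f/fc)^2),
      # with Omega0 proportional to seismic moment and fc the corner frequency
      def brune(f, omega0, fc):
          return omega0 / (1.0 + (f / fc) ** 2)

      rng = np.random.default_rng(2)
      f = np.logspace(-1, np.log10(25.0), 120)           # up to Nyquist of 50 Hz data
      true = brune(f, omega0=3.0e14, fc=4.0)
      obs = true * rng.lognormal(0.0, 0.15, f.size)      # multiplicative noise

      # Fit log-amplitudes; parameters are log-transformed to stay positive
      logmodel = lambda f, lo0, lfc: np.log(brune(f, np.exp(lo0), np.exp(lfc)))
      popt, pcov = curve_fit(logmodel, f, np.log(obs), p0=[np.log(1e14), np.log(1.0)])
      omega0, fc = np.exp(popt)
      print(f"Omega0 = {omega0:.3e}, corner frequency = {fc:.2f} Hz")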

  17. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used to forecast the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
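
    To make the moment-based idea concrete, here is a hedged toy sketch (not the paper's GMM estimator): a discrete log-normal cascade is simulated and its intermittency parameter recovered from the variance of log-amplitudes, using the fact that Var(ln|xi|) = pi^2/8 for a standard normal xi.

      import numpy as np

      rng = np.random.default_rng(3)
      n_steps, n_obs, lam2 = 10, 200_000, 0.05   # cascade depth, sample size, intermittency

      # Multiplicative log-normal cascade: x = xi * exp(sum_i omega_i),
      # with omega_i ~ N(-lam2/2, lam2) so that E[exp(omega_i)] = 1
      omega = rng.normal(-lam2 / 2, np.sqrt(lam2), (n_obs, n_steps)).sum(axis=1)
      x = rng.normal(0.0, 1.0, n_obs) * np.exp(omega)

      # Moment estimator: Var(ln|x|) = n_steps*lam2 + pi^2/8 (variance of ln|xi|)
      lam2_hat = (np.var(np.log(np.abs(x))) - np.pi ** 2 / 8) / n_steps
      print("true lam2:", lam2, " estimated:", round(lam2_hat, 4))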

  18. Agent-Based Model with Asymmetric Trading and Herding for Complex Financial Systems

    PubMed Central

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    Background: For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results.

    Methods: To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors’ asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data.

    Results: With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities.

    Conclusions: We reveal that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors’ trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. PMID:24278146
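
    The return-volatility correlation at the center of this record can be measured directly from a return series. A minimal sketch follows (illustrative toy series, not the paper's model or market data), using the common normalization L(t) = <r(0)|r(t)|^2> / <r^2>^2, where negative values at positive lags indicate the leverage effect.

      import numpy as np

      def leverage_correlation(returns, max_lag=20):
          """Return-volatility correlation L(t); negative at t > 0 = leverage effect."""
          r = np.asarray(returns, dtype=float)
          r = r - r.mean()
          norm = np.mean(r ** 2) ** 2
          return np.array([np.mean(r[:-t] * np.abs(r[t:]) ** 2) / norm
                           for t in range(1, max_lag + 1)])

      # Toy series with a built-in leverage effect: volatility rises after losses
      rng = np.random.default_rng(4)
      n = 100_000
      r, vol = np.empty(n), 1.0
      for i in range(n):
          r[i] = rng.normal(0.0, vol)
          vol = 0.9 * vol + 0.1 * (1.0 + max(0.0, -r[i]))   # losses inflate volatility
      print(leverage_correlation(r, max_lag=5))             # expect negative values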

  19. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    PubMed

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for lack of detailed information on human contact behaviour within these settings. The recent availability of data on high-resolution face-to-face interactions makes it now possible to assess how well this simplified scheme reproduces relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20% in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of the homogeneous mixing assumption.
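
    The recalibration step can be sketched in a few lines (a hedged toy, not the paper's setup): a homogeneous-mixing SIR model is integrated, and its transmission rate is refitted so that its prevalence curve matches a target curve standing in for the output of a contact-network simulation. All parameter values below are hypothetical.

      import numpy as np
      from scipy.integrate import odeint
      from scipy.optimize import minimize_scalar

      def sir_prevalence(beta, gamma=0.1, n=1000, i0=5, t=np.arange(0, 120)):
          """Homogeneous-mixing SIR; returns the prevalence curve I(t)."""
          def rhs(y, t):
              s, i = y
              return [-beta * s * i / n, beta * s * i / n - gamma * i]
          return odeint(rhs, [n - i0, i0], t)[:, 1]

      # Hypothetical "network" prevalence curve standing in for contact-data output
      target = sir_prevalence(0.25) * 0.8   # e.g. a flatter epidemic than expected

      # Recalibrate beta of the homogeneous model to fit the target curve
      loss = lambda b: np.sum((sir_prevalence(b) - target) ** 2)
      res = minimize_scalar(loss, bounds=(0.05, 1.0), method="bounded")
      print("recalibrated beta:", round(res.x, 4))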

  20. Agent-based model with asymmetric trading and herding for complex financial systems.

    PubMed

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors' asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. We reveal that for the leverage and anti-leverage effects, both the investors' asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors' trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries.

  1. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    PubMed

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.

  2. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
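
    For intuition about the BCRW side of this equivalence, here is a hedged toy simulator (all parameter values hypothetical): each step's heading compromises between the previous heading (correlation) and the direction to a target (bias), with von Mises angular noise. An SSF would be fitted to such tracks by conditional logistic regression on observed versus candidate steps, which this sketch omits.

      import numpy as np

      def simulate_bcrw(n_steps=500, rho=0.7, bias=0.3, target=(50.0, 50.0), seed=5):
          """Biased correlated random walk with von Mises heading noise."""
          rng = np.random.default_rng(seed)
          pos = np.zeros((n_steps + 1, 2))
          heading = 0.0
          for t in range(n_steps):
              to_target = np.arctan2(target[1] - pos[t, 1], target[0] - pos[t, 0])
              # weighted circular mean of previous heading and target direction
              mean_dir = np.arctan2(rho * np.sin(heading) + bias * np.sin(to_target),
                                    rho * np.cos(heading) + bias * np.cos(to_target))
              heading = rng.vonmises(mean_dir, kappa=4.0)
              step = rng.gamma(shape=2.0, scale=0.5)      # step-length distribution
              pos[t + 1] = pos[t] + step * np.array([np.cos(heading), np.sin(heading)])
          return pos

      track = simulate_bcrw()
      print("end position:", track[-1].round(2))   # drifts toward the target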

  3. Financial market dynamics: superdiffusive or not?

    NASA Astrophysics Data System (ADS)

    Devi, Sandhya

    2017-08-01

    The behavior of stock market returns over a period of 1-60 d has been investigated for S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails indicating that the stock returns do not follow a random walk model. In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from 1 d returns, lie in the range 1.4-1.65. The estimated inverse mean square deviations (beta) show a power law behavior in time with exponent values between -0.91 and -1.1, indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Planck (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from current estimated parameters are compared with the solutions of the FP equations. For negligible drift term, the inverse mean square deviations (beta_FP) from the FP model follow a power law with exponent values between -1.25 and -1.48, indicating superdiffusion. When the drift term is non-negligible, the corresponding beta_FP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit.
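
    A hedged sketch of the maximum-likelihood fitting step follows. The q-Gaussian is normalized numerically rather than in closed form, and synthetic Student-t returns stand in for market data; a Student-t with nu degrees of freedom is exactly a q-Gaussian with q = 1 + 2/(nu + 1), which gives a known answer to check against.

      import numpy as np
      from scipy.optimize import minimize

      def q_gaussian_logpdf(x, q, beta):
          """Tsallis q-Gaussian (1 < q < 3), normalized numerically on a wide grid."""
          grid = np.linspace(-60.0, 60.0, 200_001)
          kernel = lambda u: np.power(1.0 + (q - 1.0) * beta * u ** 2, -1.0 / (q - 1.0))
          z = np.trapz(kernel(grid), grid)             # normalization constant
          return np.log(kernel(x)) - np.log(z)

      def fit_q_gaussian(returns):
          nll = lambda p: -np.sum(q_gaussian_logpdf(returns, p[0], p[1]))
          res = minimize(nll, x0=[1.5, 1.0], bounds=[(1.01, 2.99), (1e-4, 1e4)],
                         method="L-BFGS-B")
          return res.x

      # Toy heavy-tailed "returns": Student-t with 5 dof corresponds to q = 1 + 2/6
      rng = np.random.default_rng(6)
      q_hat, beta_hat = fit_q_gaussian(rng.standard_t(5, 50_000))
      print(f"q = {q_hat:.3f} (theory ~ {1 + 2 / 6:.3f}), beta = {beta_hat:.3f}")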

  4. Microwave and hot air drying of garlic puree: drying kinetics and quality characteristics

    NASA Astrophysics Data System (ADS)

    İlter, Işıl; Akyıl, Saniye; Devseren, Esra; Okut, Dilara; Koç, Mehmet; Kaymak Ertekin, Figen

    2018-02-01

    In this study, the effect of hot air and microwave drying on drying kinetics and some quality characteristics such as water activity, color, optic index and volatile oil of garlic puree was investigated. The optic index, representing browning of the garlic puree, increased strongly with an increase in microwave power and hot air drying temperature. However, the volatile oil content of the dried samples decreased with increasing temperature and microwave power. By increasing the drying temperature (50, 60 and 70 °C) and microwave power (180, 360 and 540 W), the drying time decreased from 8.5 h to 4 min. In order to determine the kinetic parameters, the experimental drying data were fitted to various semi-empirical models in addition to Fick's second-law diffusion equation. Among them, the Page model gave a better fit for microwave drying, while the Logarithmic model gave a better fit for hot air drying. With increasing microwave power and hot air drying temperature, the effective moisture diffusivity (De) values ranged from 0.76×10^-8 to 2.85×10^-8 m^2/s and from 2.21×10^-10 to 3.07×10^-10 m^2/s, respectively. The activation energy was calculated as 20.90 kJ/mol for hot air drying and 21.96 W/g for microwave drying using an Arrhenius-type equation.
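
    The Page model fit mentioned above is a one-liner to reproduce. A minimal sketch (hypothetical moisture-ratio data in arbitrary drying-time units, not the study's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      # Page thin-layer drying model: MR(t) = exp(-k * t^n)
      page = lambda t, k, n: np.exp(-k * t ** n)

      # Hypothetical dimensionless moisture-ratio data over drying time (hours)
      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
      mr = np.array([1.0, 0.82, 0.61, 0.43, 0.29, 0.19, 0.12, 0.07, 0.04])

      (k, n), _ = curve_fit(page, t, mr, p0=[0.3, 1.0])
      rmse = np.sqrt(np.mean((page(t, k, n) - mr) ** 2))
      print(f"k = {k:.3f} h^-n, n = {n:.3f}, RMSE = {rmse:.4f}")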

  5. Development of Xe and Kr empirical potentials for CeO2, ThO2, UO2 and PuO2, combining DFT with high temperature MD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, M. W. D.; Kuganathan, N.; Burr, P. A.

    In this study, the development of embedded atom method (EAM) many-body potentials for actinide oxides and associated mixed oxide (MOX) systems has motivated the development of a complementary parameter set for gas-actinide and gas-oxygen interactions. A comprehensive set of density functional theory (DFT) calculations were used to study Xe and Kr incorporation at a number of sites in CeO2, ThO2, UO2 and PuO2. These structures were used to fit a potential, which was used to generate molecular dynamics (MD) configurations incorporating Xe and Kr at 300 K, 1500 K, 3000 K and 5000 K. Subsequent matching to the forces predicted by DFT for these MD configurations was used to refine the potential set. This fitting approach ensured weighted fitting to configurations that are thermodynamically significant over a broad temperature range, while avoiding computationally expensive DFT-MD calculations. The resultant gas potentials were validated against DFT trapping energies and are suitable for simulating combinations of Xe and Kr in solid solutions of CeO2, ThO2, UO2 and PuO2, providing a powerful tool for the atomistic simulation of conventional nuclear reactor fuel UO2 as well as advanced MOX fuels.

  6. Development of Xe and Kr empirical potentials for CeO2, ThO2, UO2 and PuO2, combining DFT with high temperature MD

    DOE PAGES

    Cooper, M. W. D.; Kuganathan, N.; Burr, P. A.; ...

    2016-08-23

    In this study, the development of embedded atom method (EAM) many-body potentials for actinide oxides and associated mixed oxide (MOX) systems has motivated the development of a complementary parameter set for gas-actinide and gas-oxygen interactions. A comprehensive set of density functional theory (DFT) calculations were used to study Xe and Kr incorporation at a number of sites in CeO2, ThO2, UO2 and PuO2. These structures were used to fit a potential, which was used to generate molecular dynamics (MD) configurations incorporating Xe and Kr at 300 K, 1500 K, 3000 K and 5000 K. Subsequent matching to the forces predicted by DFT for these MD configurations was used to refine the potential set. This fitting approach ensured weighted fitting to configurations that are thermodynamically significant over a broad temperature range, while avoiding computationally expensive DFT-MD calculations. The resultant gas potentials were validated against DFT trapping energies and are suitable for simulating combinations of Xe and Kr in solid solutions of CeO2, ThO2, UO2 and PuO2, providing a powerful tool for the atomistic simulation of conventional nuclear reactor fuel UO2 as well as advanced MOX fuels.

  7. A cooperative approach among methods for photometric redshifts estimation: an application to KiDS data

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Tortora, C.; Brescia, M.; Longo, G.; Radovich, M.; Napolitano, N. R.; Amaro, V.; Vellucci, C.; La Barbera, F.; Getman, F.; Grado, A.

    2017-04-01

    Photometric redshifts (photo-z) are fundamental in galaxy surveys to address different topics, from gravitational lensing and dark matter distribution to galaxy evolution. The Kilo Degree Survey (KiDS), i.e. the European Southern Observatory (ESO) public survey on the VLT Survey Telescope (VST), provides the unprecedented opportunity to exploit a large galaxy data set with an exceptional image quality and depth in the optical wavebands. Using a KiDS subset of about 25000 galaxies with measured spectroscopic redshifts, we have derived photo-z using (i) three different empirical methods based on supervised machine learning; (ii) the Bayesian photometric redshift model (or BPZ); and (iii) a classical spectral energy distribution (SED) template fitting procedure (LE PHARE). We confirm that, in the regions of the photometric parameter space properly sampled by the spectroscopic templates, machine learning methods provide better redshift estimates, with a lower scatter and a smaller fraction of outliers. SED fitting techniques, however, provide useful information on the galaxy spectral type, which can be effectively used to constrain systematic errors and to better characterize potential catastrophic outliers. Such classification is then used to specialize the training of regression machine learning models, by demonstrating that a hybrid approach, involving SED fitting and machine learning in a single collaborative framework, can be effectively used to improve the accuracy of photo-z estimates.

  8. Women Learning To Become Managers: Learning To Fit in or To Play a Different Game?

    ERIC Educational Resources Information Center

    Bryans, Patricia; Mavin, Sharon

    2003-01-01

    Explores women's experiences of learning to become managers. Discusses empirical data resulting from a questionnaire and subsequent thematic group discussion with average women managers. Highlights the importance to women managers of learning from and with others. Focuses on the contradiction women managers face, that of whether to learn to fit in…

  9. Effects of an Educational Gymnastics Course on the Motor Skills and Health-Related Fitness Components of PETE Students

    ERIC Educational Resources Information Center

    Webster, Liana

    2017-01-01

    Many physical education teacher education (PETE) programs seek to develop teacher candidates' content knowledge through various physical activity courses. However, limited empirical evidence exists linking college physical activity courses to the development of skill or fitness. The purpose of the study was to examine the effects of an educational…

  10. Exponential Correlation of IQ and the Wealth of Nations

    ERIC Educational Resources Information Center

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form: GDP = a * 10^(b*IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…

  11. The fit between health impact assessment and public policy: practice meets theory.

    PubMed

    Harris, Patrick; Sainsbury, Peter; Kemp, Lynn

    2014-05-01

    The last decade has seen increased use of health impact assessment (HIA) to influence public policies developed outside the Health sector. HIA has developed as a structured, linear and technical process to incorporate health, broadly defined, into policy. This is potentially incongruent with complex, non-linear and tactical policy making which does not necessarily consider health. HIA research has, however, not incorporated existing public policy theory to explain practitioners' experiences with HIA and policy. This research, therefore, used public policy theory to explain HIA practitioners' experiences and investigate 'What is the fit between HIA and public policy?' Empirical findings from nine in-depth interviews with international HIA practitioners were re-analysed against public policy theory. We reviewed the HIA literature for inclusion of public policy theories, then compared these for compatibility with our critical realist methodology and the empirical data. The theory 'Policy Cycles and Subsystems' (Howlett et al., 2009) was used to re-analyse the empirical data. HIAs for policy are necessarily both tactical and technical. Within policy subsystems, using HIA to influence public policy requires tactically positioning health as a relevant public policy issue and, to facilitate this, institutional support for collaboration between Public Health and other sectors. HIA fits best within the often non-linear public policy cycle as a policy formulation instrument. HIA provides, tactically and technically, a space for practical reasoning to navigate facts, values and processes underlying the substantive and procedural dimensions of policy. Re-analysing empirical experiential data using existing public policy theory provided valuable explanations for future research, policy and practice concerning why and how HIA fits tactically and technically with the world of public policy development. The use of theory and empiricism opens up important possibilities for future research in the search for better explanations of complex practical problems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.

    PubMed

    Zhao, Yuchao; Frey, H Christopher

    2004-11-01

    Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
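
    A hedged sketch of the parametric-bootstrap machinery described here (synthetic data and a hypothetical source category; the study additionally handles censored, nondetect-laden datasets, which this sketch omits): a lognormal is fitted by MLE, resampled, and refitted to propagate sampling uncertainty into the mean emission factor.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # Hypothetical emission-factor measurements for one source category (g/kg)
      data = rng.lognormal(mean=-1.0, sigma=0.8, size=25)

      # Parametric bootstrap: fit a lognormal by MLE, resample, refit, and
      # collect the uncertainty of the mean emission factor
      shape, loc, scale = stats.lognorm.fit(data, floc=0.0)
      boot_means = np.empty(5000)
      for b in range(boot_means.size):
          resample = stats.lognorm.rvs(shape, loc=0.0, scale=scale,
                                       size=data.size, random_state=rng)
          s, _, sc = stats.lognorm.fit(resample, floc=0.0)
          boot_means[b] = stats.lognorm.mean(s, loc=0.0, scale=sc)

      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      mean_hat = stats.lognorm.mean(shape, loc=0.0, scale=scale)
      print(f"mean EF = {mean_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), i.e. "
            f"{100 * (lo / mean_hat - 1):.0f}% to +{100 * (hi / mean_hat - 1):.0f}%")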

  13. Viscoelastic shear properties of human vocal fold mucosa: theoretical characterization based on constitutive modeling.

    PubMed

    Chan, R W; Titze, I R

    2000-01-01

    The viscoelastic shear properties of human vocal fold mucosa (cover) were previously measured as a function of frequency [Chan and Titze, J. Acoust. Soc. Am. 106, 2008-2021 (1999)], but data were obtained only in a frequency range of 0.01-15 Hz, an order of magnitude below typical frequencies of vocal fold oscillation (on the order of 100 Hz). This study represents an attempt to extrapolate the data to higher frequencies based on two viscoelastic theories, (1) a quasilinear viscoelastic theory widely used for the constitutive modeling of the viscoelastic properties of biological tissues [Fung, Biomechanics (Springer-Verlag, New York, 1993), pp. 277-292], and (2) a molecular (statistical network) theory commonly used for the rheological modeling of polymeric materials [Zhu et al., J. Biomech. 24, 1007-1018 (1991)]. Analytical expressions of elastic and viscous shear moduli, dynamic viscosity, and damping ratio based on the two theories with specific model parameters were applied to curve-fit the empirical data. Results showed that the theoretical predictions matched the empirical data reasonably well, allowing for parametric descriptions of the data and their extrapolations to frequencies of phonation.

  14. Practical guidance on representing the heteroscedasticity of residual errors of hydrological predictions

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2016-04-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
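
    To illustrate the two families of approaches being compared, here is a toy sketch with synthetic flows (not the study's catchments or models): the 'direct' route models the residual standard deviation as a function of the prediction, while the 'transformational' route log-transforms flows so the residuals become near-homoscedastic.

      import numpy as np

      rng = np.random.default_rng(8)

      # Synthetic flows: residual errors grow with the predicted runoff
      q_pred = rng.gamma(2.0, 5.0, 2000)                          # predictions
      q_obs = q_pred * (1.0 + rng.normal(0.0, 0.3, q_pred.size))  # heteroscedastic obs
      q_obs = np.clip(q_obs, 1e-6, None)

      # 'Direct' approach: model the residual sd as linear in the prediction,
      # sd(eta) = a + b * q_pred, fitted from binned residual spreads
      resid = q_obs - q_pred
      bins = np.quantile(q_pred, np.linspace(0, 1, 11))
      idx = np.digitize(q_pred, bins[1:-1])
      b, a = np.polyfit([q_pred[idx == i].mean() for i in range(10)],
                        [resid[idx == i].std() for i in range(10)], deg=1)
      print(f"direct: sd ~ {a:.2f} + {b:.2f} * q_pred")

      # 'Transformational' approach: residuals of log flows are near-homoscedastic
      log_resid = np.log(q_obs) - np.log(q_pred)
      print("log-residual sd, low vs high flows:",
            round(log_resid[q_pred < np.median(q_pred)].std(), 3),
            round(log_resid[q_pred >= np.median(q_pred)].std(), 3))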

  15. A generalized population dynamics model for reproductive interference with absolute density dependence.

    PubMed

    Kyogoku, Daisuke; Sota, Teiji

    2017-05-17

    Interspecific mating interactions, or reproductive interference, can affect population dynamics, species distribution and abundance. Previous population dynamics models have assumed that the impact of frequency-dependent reproductive interference depends on the relative abundances of species. However, this assumption could be an oversimplification inappropriate for making quantitative predictions. Therefore, a more general model to forecast population dynamics in the presence of reproductive interference is required. Here we developed a population dynamics model to describe the absolute density dependence of reproductive interference, which appears likely when encounter rate between individuals is important. Our model (i) can produce diverse shapes of isoclines depending on parameter values and (ii) predicts weaker reproductive interference when absolute density is low. These novel characteristics can create conditions where coexistence is stable and independent from the initial conditions. We assessed the utility of our model in an empirical study using an experimental pair of seed beetle species, Callosobruchus maculatus and Callosobruchus chinensis. Reproductive interference became stronger with increasing total beetle density even when the frequencies of the two species were kept constant. Our model described the effects of absolute density and showed a better fit to the empirical data than the existing model overall.

  16. Poroviscoelastic cartilage properties in the mouse from indentation.

    PubMed

    Chiravarambath, Sidharth; Simha, Narendra K; Namani, Ravi; Lewis, Jack L

    2009-01-01

    A method for fitting parameters in a poroviscoelastic (PVE) model of articular cartilage in the mouse is presented. Indentation is performed using two different-sized indenters, and these data are then fitted using a PVE finite element program and parameter extraction algorithm. Data from the smaller indenter, a 15 μm diameter flat-ended 60° cone, are first used to fit the viscoelastic (VE) parameters, on the basis that for this tip size the gel diffusion time (approximate time constant of the poroelastic (PE) response) is of the order of 0.1 s, so that the PE response is negligible. These parameters are then used to fit the data from a second, 170 μm diameter flat-ended 60° cone for the PE parameters, using the VE parameters extracted from the data from the 15 μm tip. Data from tests on five different mouse tibial plateaus are presented and fitted. Parameter variation studies for the larger indenter show that in this case the VE and PE time responses overlap in time, necessitating the use of both models.

  17. Alternative Approaches to Evaluation in Empirical Microeconomics

    ERIC Educational Resources Information Center

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  18. Comments on Different techniques for finding best-fit parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.; Triplett, Laurie A.

    2014-07-01

    A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used system that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.
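
    A minimal sketch of the technique (a generic golden-section search, not the authors' implementation), applied to a chi-square that depends only weakly on its parameter, exactly the situation where a gradient-based stopping rule can quit too early:

      import numpy as np

      def golden_search(f, lo, hi, tol=1e-8):
          """Golden-section search for the minimum of a 1-D function on [lo, hi].
          Converges when the bracket is smaller than tol, independent of the
          value (or flatness) of f, unlike gradient-based stopping rules."""
          g = (np.sqrt(5.0) - 1.0) / 2.0                 # inverse golden ratio
          a, b = lo, hi
          c, d = b - g * (b - a), a + g * (b - a)
          while (b - a) > tol:
              if f(c) < f(d):
                  b, d = d, c
                  c = b - g * (b - a)
              else:
                  a, c = c, d
                  d = a + g * (b - a)
          return 0.5 * (a + b)

      # Example: a chi-square that depends only weakly on the parameter p
      x = np.linspace(0, 1, 50)
      y = 1.0 + 0.01 * x
      chi2 = lambda p: np.sum((y - (1.0 + p * x)) ** 2)
      print("best-fit p:", golden_search(chi2, -1.0, 1.0))   # ~0.01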

  19. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  20. Evolution in population parameters: density-dependent selection or density-dependent fitness?

    PubMed

    Travis, Joseph; Leips, Jeff; Rodd, F Helen

    2013-05-01

    Density-dependent selection is one of earliest topics of joint interest to both ecologists and evolutionary biologists and thus occupies an important position in the histories of these disciplines. This joint interest is driven by the fact that density-dependent selection is the simplest form of feedback between an ecological effect of an organism's own making (crowding due to sustained population growth) and the selective response to the resulting conditions. This makes density-dependent selection perhaps the simplest process through which we see the full reciprocity between ecology and evolution. In this article, we begin by tracing the history of studying the reciprocity between ecology and evolution, which we see as combining the questions of evolutionary ecology with the assumptions and approaches of ecological genetics. In particular, density-dependent fitness and density-dependent selection were critical concepts underlying ideas about adaptation to biotic selection pressures and the coadaptation of interacting species. However, theory points to a critical distinction between density-dependent fitness and density-dependent selection in their influences on complex evolutionary and ecological interactions among coexisting species. Although density-dependent fitness is manifestly evident in empirical studies, evidence of density-dependent selection is much less common. This leads to the larger question of how prevalent and important density-dependent selection might really be. Life-history variation in the least killifish Heterandria formosa appears to reflect the action of density-dependent selection, and yet compelling evidence is elusive, even in this well-studied system, which suggests some important challenges for understanding density-driven feedbacks between ecology and evolution.

  1. Annual Rainfall Maxima: Theoretical Estimation of the GEV Shape Parameter k Using Multifractal Models

    NASA Astrophysics Data System (ADS)

    Veneziano, D.; Langousis, A.; Lepore, C.

    2009-12-01

    The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have a generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, the first condition is also not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter k_LD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds only in the limit d → 0, not for finite d. Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
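
    For readers wanting to reproduce the basic fitting step, here is a minimal sketch of fitting a GEV to annual maxima (a synthetic record with hypothetical parameter values, not the study's data). Note that scipy parameterizes the shape as c = -k, so the heavy-tailed EV2 range k > 0 corresponds to c < 0.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)

      # Hypothetical 60-year record of annual-maximum rainfall intensities (mm/h)
      annual_max = stats.genextreme.rvs(c=-0.15, loc=30.0, scale=8.0,
                                        size=60, random_state=rng)

      # Fit a GEV by maximum likelihood; convert scipy's c back to k = -c
      c, loc, scale = stats.genextreme.fit(annual_max)
      print(f"k = {-c:.3f}, location = {loc:.2f}, scale = {scale:.2f}")

      # 100-year return level: the quantile with annual exceedance probability 1/100
      print("100-yr return level:", round(stats.genextreme.ppf(0.99, c, loc, scale), 1))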

  2. An Empirical Spectroscopic Database for Acetylene in the Regions of 5850-9415 CM^{-1}

    NASA Astrophysics Data System (ADS)

    Campargue, Alain; Lyulin, Oleg

    2017-06-01

    Six studies have been recently devoted to a systematic analysis of the high-resolution near infrared absorption spectrum of acetylene recorded by Cavity Ring Down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution, we construct an empirical database for acetylene in the 5850-9415 cm^{-1} region excluding the 6341-7000 cm^{-1} interval corresponding to the very strong ν1+ν3 manifold. The database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm^{-1} region are reported for the first time together with those of several bands of ^{12}C^{13}CH_{2} present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided with positions calculated using empirical spectroscopic parameters of the lower and upper energy vibrational levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows completing the experimental list by adding missing lines and improving poorly determined positions and intensities. As a result the constructed line list includes a total of 10973 lines belonging to 146 bands of ^{12}C_{2}H_{2} and 29 bands of ^{12}C^{13}CH_{2}. For comparison the HITRAN2012 database in the same region includes 869 lines of 14 bands, all belonging to ^{12}C_{2}H_{2}. Our weakest lines have an intensity on the order of 10^{-29} cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut off. Line profile parameters are added to the line list which is provided in HITRAN format. The comparison to the HITRAN2012 line list or to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.

  3. STAR COUNT DENSITY PROFILES AND STRUCTURAL PARAMETERS OF 26 GALACTIC GLOBULAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.

    We used an appropriate combination of high-resolution Hubble Space Telescope observations and wide-field, ground-based data to derive the radial stellar density profiles of 26 Galactic globular clusters from resolved star counts (all of which can be freely downloaded online). With respect to surface brightness (SB) profiles (which can be biased by the presence of sparse, bright stars), star counts are considered to be the most robust and reliable tool to derive cluster structural parameters. For each system, a detailed comparison with both King and Wilson models has been performed and the most relevant best-fit parameters have been obtained. This collection of data represents the largest homogeneous catalog collected so far of star count profiles and structural parameters derived therefrom. The analysis of the data of our catalog has shown that (1) the presence of the central cusps previously detected in the SB profiles of NGC 1851, M13, and M62 is not confirmed; (2) the majority of clusters in our sample are fit equally well by the King and the Wilson models; (3) we confirm the known relationship between cluster size (as measured by the effective radius) and galactocentric distance; (4) the ratio between the core and the effective radii shows a bimodal distribution, with a peak at ≈0.3 for about 80% of the clusters and a secondary peak at ≈0.6 for the remaining 20%. Interestingly, the main peak turns out to be in agreement with that expected from simulations of cluster dynamical evolution, and the ratio between these two radii correlates well with an empirical dynamical-age indicator recently defined from the observed shape of the blue straggler star radial distribution, thus suggesting that no exotic mechanisms of energy generation are needed in the cores of the analyzed clusters.

  4. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
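
    A toy sketch of why the third treatment matters (a grid-based, hypothetical one-parameter problem; the paper itself uses Transitional MCMC on a six-story model): fixing the prediction-error variance at a guessed value narrows the posterior, while marginalizing over it propagates that extra uncertainty.

      import numpy as np

      rng = np.random.default_rng(10)

      # Toy updating problem: infer a stiffness-like theta from noisy "measurements"
      theta_true, sigma_true = 2.0, 0.4
      y = theta_true + rng.normal(0.0, sigma_true, 8)          # few measurements

      thetas = np.linspace(0.0, 4.0, 400)

      # Treatment 1: fix the prediction-error sd at an empirical guess
      sigma_fixed = 0.2
      post_fixed = np.exp(-0.5 * ((y - thetas[:, None]) / sigma_fixed) ** 2).prod(axis=1)
      post_fixed /= np.trapz(post_fixed, thetas)

      # Treatment 3: put a grid prior on sigma and marginalize it out
      sigmas = np.linspace(0.05, 2.0, 200)
      lik = np.exp(-0.5 * ((y - thetas[:, None, None]) / sigmas[:, None]) ** 2
                   ) / sigmas[:, None]
      post_full = lik.prod(axis=2).sum(axis=1)         # sum over sigma = marginalize
      post_full /= np.trapz(post_full, thetas)

      def sd(p):   # posterior standard deviation on the theta grid
          m = np.trapz(p * thetas, thetas)
          return np.sqrt(np.trapz(p * (thetas - m) ** 2, thetas))

      print("posterior sd, fixed sigma :", round(sd(post_fixed), 3))
      print("posterior sd, marginalized:", round(sd(post_full), 3))   # wider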

  5. On the shape of martian dust and water ice aerosols

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.; Clancy, R. T.; Clayton, G. C.

    2000-10-01

    Researchers have often calculated radiative properties of Martian aerosols using either Mie theory for homogeneous spheres or semi-empirical theories. Given that these atmospheric particles are randomly oriented, this approach seems fairly reasonable. However, the idea that randomly oriented nonspherical particles have scattering properties equivalent to even a select subset of spheres is demonstrably false (Bohren and Huffman 1983; Bohren and Koh 1985, Appl. Optics, 24, 1023). Fortunately, recent computational developments now enable us to directly compute scattering properties for nonspherical particles. We have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-Matrix approach as improved by Mishchenko and collaborators; cf., Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm to constrain the shape of water ice and dust aerosols. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component (starting with some ``guess'') from radiative transfer models of MGS Thermal Emission Spectrometer Emission Phase Function (EPF) sequences (for details on this step, see Clancy et al., DPS 2000). Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a ``best-fit'' theoretical phase function. In this presentation, we provide details on the second step in our analysis, including the derived phase functions (for several characteristic EPF sequences) as well as the particle properties of the best-fit theoretical models. We provide a sensitivity analysis for the EPF model-data comparisons in terms of perturbations in the particle properties (i.e., range of axial ratios, sizes, refractive indices, etc). This work is supported through NASA grant NAGS-9820 (MJW) and JPL contract no. 961471 (RTC).

  6. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    PubMed

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers-in order to clarify whether these methods can make reliable inferences from empirical datasets-and to theoretical biologists-in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  7. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods

    PubMed Central

    May, Michael R.; Moore, Brian R.

    2016-01-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers—in order to clarify whether these methods can make reliable inferences from empirical datasets—and to theoretical biologists—in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] PMID:27037081

  8. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
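
    The core idea, quantifying overfitting by fitting the same model to artificial spectra that cannot carry any signal, can be sketched in a few lines. In this hedged toy (synthetic data; sklearn's PLS standing in for the paper's seven techniques), the apparent fit on artificial spectra measures pure overfitting at each complexity level:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(11)
      n_samples, n_bands = 60, 300

      # Realistic-looking task: few samples, many correlated spectral bands
      X = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)   # smooth "spectra"
      y = X[:, 50] - 0.5 * X[:, 200] + rng.normal(0.0, 1.0, n_samples)

      for n_comp in (1, 2, 5, 10, 20):
          r2_real = r2_score(y, PLSRegression(n_comp).fit(X, y).predict(X))
          # Artificial spectra with no relation to y: any fit here is pure overfit
          X_null = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)
          r2_null = r2_score(y, PLSRegression(n_comp).fit(X_null, y).predict(X_null))
          print(f"components={n_comp:2d}  R2(real)={r2_real:.2f}  "
                f"R2(artificial)={r2_null:.2f}")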

  9. Noncompound nucleus decay contribution in the 12C+93Nb reaction using various formulations of nuclear proximity potential

    NASA Astrophysics Data System (ADS)

    Chopra, Sahila; Kaur, Arshdeep; Gupta, Raj K.

    2015-01-01

    The earlier study of excitation functions of 105Ag*, formed in the 12C+93Nb reaction, based on the dynamical cluster-decay model (DCM) with the pocket formula for the nuclear proximity potential, is extended here to other nuclear interaction potentials derived from the Skyrme energy density functional (SEDF) based on the semiclassical extended Thomas-Fermi (ETF) approach, and to the extended-Wong model of Gupta and collaborators. The Skyrme forces used are the old SIII and SIV and the new SSk, GSkI, and KDE0(v1), given for both normal and isospin-rich nuclei, with densities added in the frozen-density approximation. Taking advantage of the fact that different Skyrme forces provide different barrier characteristics, we look for "barrier modification" effects in terms of choosing an appropriate force, and hence for the existence or nonexistence of noncompound-nucleus (nCN) effects in this reaction. Interestingly, independent of the choice of Skyrme or proximity force, the extended-Wong model fits the experimental data nicely, without any barrier modification and hence with no nCN component in the measured fusion cross section, which consists of light-particle evaporation residues (ER) and intermediate-mass fragments (IMFs) up to mass 13, i.e., σ_fusion^Expt = σ_ER + σ_IMFs. However, the fusion cross section predicted by the extended-Wong model is much larger, possibly because of the so-far missing fusion-fission (ff) component in the data. On the other hand, in agreement with the earlier work using the pocket proximity potential, the DCM fits only some data (mainly IMFs) for only some Skyrme forces, and hence presents the chosen reaction as a case with a large nCN component. The empirically estimated nCN content is fitted using the DCM with a fragment preformation factor equal to one, i.e., DCM (P0 = 1), by introducing "barrier modification" through changes in the neck-length parameter ΔR for a best fit to the empirical nCN data in each (ER and IMF) decay channel. The ff component of the DCM is predicted to lie around the symmetric mass A/2 ± 16. All calculations are made for deformed and oriented coplanar nuclei.

  10. Molecular dynamics simulations of fluid cyclopropane with MP2/CBS-fitted intermolecular interaction potentials

    NASA Astrophysics Data System (ADS)

    Ho, Yen-Ching; Wang, Yi-Siang; Chao, Sheng D.

    2017-08-01

    Modeling fluid cycloalkanes with molecular dynamics simulations has proven to be a very challenging task, partly because of the lack of a reliable force field based on quantum chemistry calculations. In this paper, we construct an ab initio force field for fluid cyclopropane using second-order Møller-Plesset (MP2) perturbation theory. We consider 15 conformers of the cyclopropane dimer for the orientation sampling. Single-point energies at important geometries are calibrated by the coupled-cluster method with single, double, and perturbative triple excitations [CCSD(T)]. Dunning's correlation-consistent basis sets (up to aug-cc-pVTZ) are used in extrapolating the interaction energies at the complete basis set (CBS) limit. The force field parameters in a 9-site Lennard-Jones model are regressed against the calculated interaction energies without using empirical data. With this ab initio force field, we perform molecular dynamics simulations of fluid cyclopropane and calculate both the structural and dynamical properties. We compare the simulation results with those using an empirical force field and obtain quantitative agreement for the detailed atom-wise radial distribution functions. The experimentally observed gross radial distribution function (extracted from neutron scattering measurements) is well reproduced in our simulation. Moreover, the calculated self-diffusion coefficients and shear viscosities are in good agreement with the experimental data over a wide range of thermodynamic conditions. To the best of our knowledge, this is the first ab initio force field capable of competing with empirical force fields for simulating fluid cyclopropane.
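
    The abstract describes regressing 9-site Lennard-Jones parameters against MP2/CBS interaction energies. The toy sketch below fits a single-site 12-6 Lennard-Jones pair potential to a made-up dimer energy scan with SciPy; the energies, distances, and starting guesses are all hypothetical, and the real force field uses nine interaction sites per molecule rather than one.

        import numpy as np
        from scipy.optimize import curve_fit

        def lj(r, epsilon, sigma):
            """12-6 Lennard-Jones pair energy."""
            x = (sigma / r) ** 6
            return 4.0 * epsilon * (x ** 2 - x)

        # Hypothetical dimer scan: separations (angstrom) vs. ab initio energies (kcal/mol)
        r = np.array([3.4, 3.6, 3.8, 4.0, 4.4, 5.0, 6.0])
        e = np.array([1.20, -0.05, -0.42, -0.45, -0.30, -0.12, -0.03])

        popt, _ = curve_fit(lj, r, e, p0=[0.5, 3.5])
        print("epsilon = %.3f kcal/mol, sigma = %.3f angstrom" % tuple(popt))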

  11. On the Deduction of Galactic Abundances with Evolutionary Neural Networks

    NASA Astrophysics Data System (ADS)

    Taylor, M.; Diaz, A. I.

    2007-12-01

    A growing number of indicators are now being used with some confidence to measure the metallicity (Z) of photoionisation regions in planetary nebulae, galactic HII regions (GHIIRs), extragalactic HII regions (EGHIIRs), and HII galaxies (HIIGs). However, a universal indicator valid also at high metallicities has yet to be found. Here, we report on a new artificial-intelligence-based approach to determining metallicity indicators that shows promise for improved empirical fits. The method hinges on the application of an evolutionary neural network to observational emission line data. The network's DNA, encoded in its architecture, weights, and neuron transfer functions, is evolved using a genetic algorithm. Furthermore, selection, operating on a set of 10 distinct neuron transfer functions, means that the empirical relation encoded in the network solution architecture is in functional rather than numerical form. Thus the network solutions provide an equation for the metallicity in terms of line ratios without a priori assumptions. Tapping into the mathematical power offered by this approach, we applied the network to detailed observations of both nebular and auroral emission lines from 0.33 μm to 1 μm for a sample of 96 HII-type regions, and we were able to obtain an empirical relation between Z and S_{23} with a dispersion of only 0.16 dex. We show how the method can be used to identify new diagnostics, as well as the nonlinear relationship supposed to exist between the metallicity Z, ionisation parameter U, and effective (or equivalent) temperature T*.

  12. Government Career Interests, Perceptions of Fit, and Degree Orientations: Exploring Their Relationship in Public Administration Graduate Programs

    ERIC Educational Resources Information Center

    Bright, Leonard

    2018-01-01

    Scholars have long suggested that the degree orientations of public administration programs were related to the attitudes and behaviors of students, even though empirical research had failed to confirm this relationship. The purpose of this study was to re-examine this question from the standpoint of perceptions of fit. Using a sample of…

  13. Problematising "Education" and "Training" in the Scottish Sport and Fitness, Play and Outdoor Sectors

    ERIC Educational Resources Information Center

    Foley, M.; Frew, M.; McGillivray, D.; McIntosh, A.; McPherson, G.

    2004-01-01

    Sets out the issues peculiar to the Scottish workforce in sport and fitness, play and the outdoor sectors. Provides an exploration of the development of vocational education in the form of sector skills training for these sectors in opposition to that formal education provided at further and higher education level. Draws on empirical research…

  14. Computational design of hepatitis C vaccines using empirical fitness landscapes and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic, yet despite 25 years of research, no vaccine is available. A major obstacle is the virus's extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design vaccines; they represent the mutational "playing field" over which the virus evolves. We have integrated them with agent-based models of viral mutation and the host immune response, establishing a data-driven multi-scale immune simulator. We have used this simulator to perform in silico screening of HCV immunogens and to rationally design vaccines that both cripple viral fitness and block escape. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.

  15. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMSs) have not previously been addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4%, and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
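
    To make the scaling argument concrete, the following sketch combines a Poisson counting term scaling as n^0.5 with a 4% constant-relative fitting term scaling as n^1; combining the two in quadrature is an assumption made here for illustration. It shows where the fitting term takes over at high signal.

        import numpy as np

        alpha = 0.04                     # ~4% relative fitting imprecision, from the text
        n = np.logspace(1, 6, 6)         # fitted peak heights in ion counts

        poisson = np.sqrt(n)             # counting term, scales as n**0.5
        fitting = alpha * n              # peak-fitting term, scales as n**1
        total = np.sqrt(poisson ** 2 + fitting ** 2)

        for ni, p, f, t in zip(n, poisson, fitting, total):
            print(f"n = {ni:9.0f}   poisson = {p:8.1f}   fitting = {f:8.1f}   total = {t:8.1f}")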

  16. Neutron-antineutron oscillations in nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dover, C.B.; Gal, A.; Richard, J.M.

    1983-03-01

    We present calculations of the neutron-antineutron (n-n̄) annihilation lifetime T in deuterium, ^16O, and ^56Fe in terms of the free-space oscillation time τ_nn̄. The coupled Schroedinger equations for the n and n̄ wave functions in a nucleus are solved numerically, using a realistic shell-model potential which fits the empirical binding energies of the neutron orbits, and a complex n̄-nucleus optical potential obtained from fits to p̄-atom level shifts. Most previous estimates of T in nuclei, which exhibit large variations, are found to be quite inaccurate. When the nuclear-physics aspects of the problem are handled properly (in particular, the finite neutron binding, the nuclear radius, and the surface diffuseness), the results are found to be rather stable with respect to allowable changes in the parameters of the nuclear model. We conclude that experimental limits on T in nuclei can be used to give reasonably precise constraints on τ_nn̄: T > 10^30 or 10^31 yr leads to τ_nn̄ > (1.5-2) × 10^7 or (5-6) × 10^7 s, respectively.

  17. An introduction to multidimensional measurement using Rasch models.

    PubMed

    Briggs, Derek C; Wilson, Mark

    2003-01-01

    The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates or, as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data and provides more reliable estimates of student achievement than the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.

  18. Sex-biased dispersal, kin selection and the evolution of sexual conflict.

    PubMed

    Faria, Gonçalo S; Varela, Susana A M; Gardner, Andy

    2015-10-01

    There is growing interest in resolving the curious disconnect between the fields of kin selection and sexual selection. Rankin's (2011, J. Evol. Biol. 24, 71-81) theoretical study of the impact of kin selection on the evolution of sexual conflict in viscous populations has been particularly valuable in stimulating empirical research in this area. An important goal of that study was to understand the impact of sex-specific rates of dispersal upon the coevolution of male-harm and female-resistance behaviours. But the fitness functions derived in Rankin's study do not flow from his model's assumptions and, in particular, are not consistent with sex-biased dispersal. Here, we develop new fitness functions that do logically flow from the model's assumptions, to determine the impact of sex-specific patterns of dispersal on the evolution of sexual conflict. Although Rankin's study suggested that increasing male dispersal always promotes the evolution of male harm and that increasing female dispersal always inhibits the evolution of male harm, we find that the opposite can also be true, depending upon parameter values. © 2015 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.

  19. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously active dipoles. The size of the MulGenErr is a function of both the model used and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating dipoles, the accuracy of the fitting model for that set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  20. AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan

    Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R_☉, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid-K to mid-M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler-42), 254 (Kepler-45), and 571 (Kepler-186), the latter of which hosts a rocky planet orbiting in its star's habitable zone.

  1. Exponential 6 parameterization for the JCZ3-EOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGee, B.C.; Hobbs, M.L.; Baer, M.R.

    1998-07-01

    A database has been created for use with the Jacobs-Cowperthwaite-Zwisler-3 equation-of-state (JCZ3-EOS) to determine thermochemical equilibrium for detonation and expansion states of energetic materials. The JCZ3-EOS uses the exponential-6 intermolecular potential function to describe interactions between molecules. All product species are characterized by r*, the radius of the minimum pair potential energy, and ε/k, the well-depth energy normalized by Boltzmann's constant. These parameters constitute the JCZS (S for Sandia) EOS database describing 750 gases (including all the gases in the JANNAF tables), and have been obtained by using Lennard-Jones potential parameters, a corresponding-states theory, pure-liquid shock Hugoniot data, and fit values using an empirical EOS. This database can be used with the CHEETAH 1.40 or CHEETAH 2.0 interface to the TIGER computer program that predicts the equilibrium state of gas- and condensed-phase product species. The large JCZS-EOS database permits intermolecular-potential-based equilibrium calculations of energetic materials with complex elemental composition.
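
    A common algebraic form of the exponential-6 potential in terms of the r* and ε/k parameters described above is sketched below; the steepness parameter α and the example parameter values are illustrative assumptions, not entries from the JCZS database.

        import numpy as np

        def exp6(r, eps_over_k, r_star, alpha=13.0):
            """Exponential-6 pair potential, in kelvin when eps/k is in kelvin.

            eps_over_k : well depth divided by Boltzmann's constant
            r_star     : radius of the potential minimum
            alpha      : repulsive steepness (assumed illustrative value)
            """
            return eps_over_k / (alpha - 6.0) * (
                6.0 * np.exp(alpha * (1.0 - r / r_star)) - alpha * (r_star / r) ** 6
            )

        # Sanity check: the potential equals -eps/k at the minimum r = r*
        print(exp6(np.array([3.0, 3.8, 5.0]), eps_over_k=120.0, r_star=3.8))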

  2. Retrieval of the thickness of undeformed sea ice from C-band compact polarimetric SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Dierking, W.; Zhang, J.; Meng, J. M.; Lang, H. T.

    2015-10-01

    In this paper we introduce a parameter for the retrieval of the thickness of undeformed first-year sea ice that is specifically adapted to compact polarimetric SAR images. The parameter is denoted as "CP-Ratio". In model simulations we investigated the sensitivity of CP-Ratio to the dielectric constant, thickness, surface roughness, and incidence angle. From the results of the simulations we deduced optimal conditions for the thickness retrieval. On the basis of C-band CTLR SAR data, which were generated from Radarsat-2 quad-polarization images acquired jointly with helicopter-borne sea ice thickness measurements in the region of the Sea of Labrador, we tested empirical equations for thickness retrieval. An exponential fit between CP-Ratio and ice thickness provides the most reliable results. Based on a validation using other compact polarimetric SAR images from the same region we found a root mean square (rms) error of 8 cm and a maximum correlation coefficient of 0.92 for the retrieval procedure when applying it on level ice of 0.9 m mean thickness.
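
    A minimal version of such an exponential thickness retrieval, assuming (hypothetically) that thickness grows as a * exp(b * CP-Ratio) and using invented co-located measurements in place of the Radarsat-2/helicopter data, could look like this:

        import numpy as np
        from scipy.optimize import curve_fit

        def thickness(cp_ratio, a, b):
            """Assumed exponential relation between CP-Ratio and ice thickness (m)."""
            return a * np.exp(b * cp_ratio)

        # Hypothetical co-located CP-Ratio values and measured thicknesses (m)
        cp = np.array([0.15, 0.22, 0.30, 0.41, 0.48, 0.55])
        h = np.array([0.35, 0.45, 0.60, 0.85, 1.00, 1.20])

        popt, _ = curve_fit(thickness, cp, h, p0=[0.2, 3.0])
        rms = np.sqrt(np.mean((h - thickness(cp, *popt)) ** 2))
        print("a = %.2f, b = %.2f, rms = %.3f m" % (popt[0], popt[1], rms))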

  3. Tidal radii of the globular clusters M 5, M 12, M 13, M 15, M 53, NGC 5053 and NGC 5466 from automated star counts.

    NASA Astrophysics Data System (ADS)

    Lehmann, I.; Scholz, R.-D.

    1997-04-01

    We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar, and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special attention was given to reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962) we derived the following structural parameters: tidal radius r_t, core radius r_c, and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13, and M 15 we found an indication of a tidal tail around these objects (cf. Grillmair et al. 1995).
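
    The King (1962) empirical law used in this fit has a closed form; a short sketch of the profile and the concentration parameter derived from it follows, with all parameter values hypothetical.

        import numpy as np

        def king_1962(r, k, r_c, r_t):
            """King (1962) surface-density law; zero beyond the tidal radius r_t."""
            term = (1.0 / np.sqrt(1.0 + (r / r_c) ** 2)
                    - 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2))
            return np.where(r < r_t, k * term ** 2, 0.0)

        def concentration(r_c, r_t):
            """Concentration parameter c = log10(r_t / r_c)."""
            return np.log10(r_t / r_c)

        r = np.array([0.5, 2.0, 10.0, 20.0])       # projected radius, arbitrary units
        print(king_1962(r, k=100.0, r_c=1.5, r_t=15.0))
        print("c =", concentration(1.5, 15.0))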

  4. VizieR Online Data Catalog: Tidal radii of 7 globular clusters (Lehmann+ 1997)

    NASA Astrophysics Data System (ADS)

    Lehmann, I.; Scholz, R.-D.

    1998-02-01

    We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar, and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special attention was given to reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962AJ.....67..471K) we derived the following structural parameters: tidal radius rt, core radius rc, and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13, and M 15 we found an indication of a tidal tail around these objects (cf. Grillmair et al., 1995AJ....109.2553G). (1 data file).

  5. Two years of on-orbit gallium arsenide performance from the LIPS solar cell panel experiment

    NASA Technical Reports Server (NTRS)

    Francis, R. W.; Betz, F. E.

    1985-01-01

    The LIPS on-orbit performance of the gallium arsenide panel experiment was analyzed from flight operation telemetry data. Algorithms were developed to calculate the daily maximum power and associated solar array parameters by two independent methods. The first technique uses a least-mean-square polynomial fit to the power curve obtained with intensity- and temperature-corrected currents and voltages, whereas the second incorporates an empirical expression for fill factor based on the open-circuit voltage and the calculated series resistance. Maximum power, fill factor, open-circuit voltage, short-circuit current, and series resistance of the solar cell array are examined as a function of flight time. Trends are analyzed with respect to possible mechanisms which may affect successive periods of output power during two years of flight operation. Degradation factors responsible for the on-orbit performance characteristics of gallium arsenide are discussed in relation to the calculated solar cell parameters. Performance trends and the potential degradation mechanisms are correlated with existing laboratory and flight data on both gallium arsenide and silicon solar cells for similar environments.

  6. Quantifying the origin of metallic glass formation

    NASA Astrophysics Data System (ADS)

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a 'nose temperature' T* located between the glass transition temperature Tg and the crystal melting temperature TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg = Tg/TL. Angell introduced a dimensionless 'fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m and involves two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*.

  7. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracy of forest biomass maps is inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
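
    The sensitivity of fitted allometric parameters to sample size can be illustrated with a toy bootstrap: a synthetic "population" with a known height-crown power law is subsampled at different sizes and the exponent refit each time. The population parameters and noise model below are invented, and this sketch shows only the spread of the fitted exponent, not the specific height bias reported in the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic population with a known power law: height = 6 * crown**0.6 * noise
        crown = rng.uniform(0.5, 8.0, size=100_000)
        height = 6.0 * crown ** 0.6 * rng.lognormal(sigma=0.25, size=crown.size)

        for n in (30, 300, 30_000):
            exponents = []
            for _ in range(200):
                idx = rng.integers(0, crown.size, size=n)
                slope, _ = np.polyfit(np.log(crown[idx]), np.log(height[idx]), 1)
                exponents.append(slope)
            print(f"n = {n:6d}: exponent = {np.mean(exponents):.3f} +/- {np.std(exponents):.3f}")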

  8. A Semi-Empirical Model for Forecasting Relativistic Electrons at Geostationary Orbit

    NASA Technical Reports Server (NTRS)

    Lyatsky, Wladislaw; Khazanov, George V.

    2008-01-01

    We developed a new prediction model for forecasting relativistic (>2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. This model assumes multi-step particle acceleration and is based on numerically integrating two linked continuity equations, one for primarily accelerated particles and one for relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source, we used a coupling function that is a best-fit combination of the solar wind and interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for the whole four-year period, as well as for each of these years, is about 0.9. This high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.

  9. CNV detection method optimized for high-resolution arrayCGH by normality test.

    PubMed

    Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun

    2012-04-01

    High-resolution arrayCGH platforms make it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools, fitted to early low-resolution data, are not applicable to larger high-resolution datasets. When CNV detection tools are applied to high-resolution data, they suffer from high false-positive rates, which increases validation cost. Existing CNV detection tools also require optimal parameter values, and in most cases obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. This tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes, which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation of the CNV detection problem that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Flow properties of the solar wind obtained from white light data and a two-fluid model

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai; Esser, Ruth; Guhathakurta, Madhulika; Fisher, Richard

    1994-01-01

    The flow properties of the solar wind from 1 R_s to 1 AU were obtained using a two-fluid model constrained by densities and scale-height temperatures derived from white light observations, as well as knowledge of the electron temperature in coronal holes. The observations were obtained with the white light coronagraphs on SPARTAN 201-1 and at Mauna Loa (Hawaii), in a north polar coronal hole from 1.16 to 5.5 R_s on 11 Apr. 1993. By specifying the density, temperature, Alfven wave velocity amplitude, and heating function at the coronal base, it was found that the model fits the constraints of the empirical density profiles and temperatures well. The optimal range of the input parameters was found to yield a higher proton temperature than electron temperature in the inner corona. The results indicate that no preferential heating of the protons at larger distances is needed to produce higher proton than electron temperatures at 1 AU, as observed in the high-speed solar wind.

  11. Modeling and forecasting the distribution of Vibrio vulnificus in Chesapeake Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, John M.; Rhodes, M.; Brown, C. W.

    The aim is to construct statistical models to predict the presence, abundance, and potential virulence of Vibrio vulnificus in surface waters. A variety of statistical techniques were used in concert to identify water quality parameters associated with V. vulnificus presence, abundance, and virulence markers, in the interest of developing strong predictive models for use in regional oceanographic modeling systems. A suite of models is provided to represent the best model fit and alternatives using environmental variables, allowing them to be put to immediate use in current ecological forecasting efforts. Conclusions: Environmental parameters such as temperature, salinity, and turbidity are capable of accurately predicting the abundance and distribution of V. vulnificus in Chesapeake Bay. Forcing these empirical models with output from ocean modeling systems allows for spatially explicit forecasts up to 48 h in the future. This study uses one of the largest data sets compiled to model Vibrio in an estuary, enhances our understanding of environmental correlates with abundance, distribution, and presence of potentially virulent strains, and offers a method to forecast these pathogens that may be replicated in other regions.

  12. Solar radiation over Egypt: Comparison of predicted and measured meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamel, M.A.; Shalaby, S.A.; Mostafa, S.S.

    1993-06-01

    Measurements of global solar irradiance on a horizontal surface at five meteorological stations in Egypt for the three years 1987, 1988, and 1989 are compared with their corresponding values computed by two independent methods. The first method is based on the Angstrom formula, which correlates the relative solar irradiance H/H_0 to the corresponding relative duration of bright sunshine n/N. Regional regression coefficients are obtained and used for prediction of global solar irradiance, giving good agreement with measurements. In the second method, an empirical relation is employed that takes sunshine duration and the noon altitude of the sun as inputs, together with an appropriate choice of zone parameters. This also gives good agreement with the measurements. Comparison shows that the first method gives the better fit to the experimental data.
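
    The Angstrom formula in the first method is a straight line, H/H_0 = a + b(n/N), so the regional coefficients reduce to an ordinary least-squares fit; the monthly values below are invented for illustration.

        import numpy as np

        # Hypothetical monthly means: relative sunshine duration vs. clearness index
        n_over_N = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.90])
        H_over_H0 = np.array([0.48, 0.52, 0.57, 0.61, 0.66, 0.69])

        # Angstrom formula H/H0 = a + b * (n/N), fitted by ordinary least squares
        b, a = np.polyfit(n_over_N, H_over_H0, 1)
        print(f"a = {a:.3f}, b = {b:.3f}")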

  13. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  14. Changes in impedance of Ni/Cd cells with voltage and cycle life

    NASA Technical Reports Server (NTRS)

    Reid, Margaret A.

    1992-01-01

    Impedances of aerospace design Super Ni/Cd cells are being measured as functions of voltage and number of cycles. The cells have been cycled over 4400 cycles to date. Analysis of the impedance data has been made using a number of equivalent circuits. The model giving the best fit over the whole range of voltage has a parallel circuit of a kinetic resistance and a constant phase element in series with the ohmic resistance. The values for the circuit elements have been treated as empirical parameters, and no attempt has been made as yet to correlate them with physical and chemical changes in the electrode. No significant changes have been seen as yet with the exception of a decrease in kinetic resistance at low states of charge in the first 500 cycles.
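
    The best-fitting circuit described here (ohmic resistance in series with a kinetic resistance in parallel with a constant phase element) has a simple closed-form impedance; the sketch below evaluates it at a few frequencies with hypothetical element values.

        import numpy as np

        def cell_impedance(freq_hz, r_ohm, r_k, q, n):
            """Z = R_ohm + R_k / (1 + R_k * Q * (j*omega)**n), where Z_CPE = 1/(Q*(j*omega)**n)."""
            omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
            return r_ohm + r_k / (1.0 + r_k * q * (1j * omega) ** n)

        # Hypothetical element values (ohms, ohms, S*s**n, dimensionless)
        z = cell_impedance([0.01, 1.0, 100.0], r_ohm=0.002, r_k=0.01, q=5.0, n=0.8)
        print(np.abs(z), np.angle(z, deg=True))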

  15. Badhwar - O'Neill 2014 Galactic Cosmic Ray Flux Model Description

    NASA Technical Reports Server (NTRS)

    O'Neill, P. M.; Golge, S.; Slaba, T. C.

    2014-01-01

    The Badhwar-O'Neill (BON) Galactic Cosmic Ray (GCR) model is based on GCR measurements from particle detectors. The model has mainly been used by NASA to certify microelectronic systems and to analyze radiation health risks to astronauts on space missions. The BON14 model numerically solves the Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration under the assumption of a spherically symmetric heliosphere. The model also incorporates an empirical time-delay function to account for the lag with which solar activity reaches the boundary of the heliosphere. This technical paper describes the most recent improvements in the parameter fits of the BON model (BON14). Using a comprehensive measurement database, it is shown that BON14 is significantly improved over the previous version, BON11.

  16. Superconducting cosmic string loops as sources for fast radio bursts

    NASA Astrophysics Data System (ADS)

    Cao, Xiao-Feng; Yu, Yun-Wei

    2018-01-01

    The cusp burst radiation of superconducting cosmic string (SCS) loops is thought to be a possible origin of observed fast radio bursts. With the model-predicted radiation spectrum and the redshift- and energy-dependent event rate, we fit the observational redshift and energy distributions of 21 Parkes fast radio bursts and constrain the model parameters. It is found that the model can basically be consistent with the observations if the current on the SCS loops has a present value of ~10^16 μ_17^(9/10) esu s^-1 and evolves with redshift as an empirical power law ~(1+z)^-1.3, where μ_17 = μ/(10^17 g cm^-1) is the string tension. This current evolution may provide a clue to probing the evolution of cosmic magnetic fields and the gathering of SCS loops into galaxy clusters.

  17. Hybrid functional study of band structures of GaAs1-xNx and GaSb1-xNx alloys

    NASA Astrophysics Data System (ADS)

    Virkkala, Ville; Havu, Ville; Tuomisto, Filip; Puska, Martti J.

    2012-02-01

    Band structures of GaAs1-xNx and GaSb1-xNx alloys are studied in the framework of density functional theory within the hybrid functional scheme (HSE06). We find that the scheme gives a clear improvement over the traditional (semi)local functionals in describing, in qualitative agreement with experiments, the bowing of the electron energy band gap in GaAs1-xNx alloys. In the case of GaSb1-xNx alloys, the hybrid functional used makes the ab initio study of band structures possible without any empirical parameter fitting. We explain the trends in the band gap reductions in the two materials, which result mainly from the positions of the nitrogen-induced states with respect to the bottoms of the bulk conduction bands.

  18. Reduced arterial stiffness in very fit boys and girls.

    PubMed

    Weberruß, Heidi; Pirzer, Raphael; Schulz, Thorsten; Böhm, Birgit; Dalla Pozza, Robert; Netz, Heinrich; Oberhoffer, Renate

    2017-01-01

    Low cardiorespiratory fitness is associated with higher cardiovascular risk, whereas high levels of cardiorespiratory fitness protect the cardiovascular system. Carotid intima-media thickness and arterial distensibility are well-established parameters for identifying subclinical cardiovascular disease. Therefore, this study investigated the influence of cardiorespiratory fitness and muscular strength on carotid intima-media thickness and arterial distensibility in 697 children and adolescents (376 girls), aged 7-17 years. Cardiorespiratory fitness and strength were measured with the test battery FITNESSGRAM; carotid intima-media thickness, arterial compliance, elastic modulus, stiffness index β, and pulse wave velocity β were assessed by B- and M-mode ultrasound at the common carotid artery. In bivariate correlation, cardiorespiratory fitness was significantly associated with all cardiovascular parameters and was an independent predictor in multivariate regression analysis. No significant associations were obtained for muscular strength. In a one-way variance analysis, very fit boys and girls (58 boys and 74 girls > 80th percentile for cardiorespiratory fitness) had significantly decreased stiffness parameters (expressed in standard deviation scores) compared with low-fit subjects (71 boys and 77 girls < 20th percentile for cardiorespiratory fitness): elastic modulus −0.16 ± 1.02 versus 0.19 ± 1.17, p = 0.009; stiffness index β −0.15 ± 1.08 versus 0.16 ± 1.1, p = 0.03; and pulse wave velocity β −0.19 ± 1.02 versus 0.19 ± 1.14, p = 0.005. Cardiorespiratory fitness was associated with healthier arteries in children and adolescents. Comparison of very fit with unfit subjects revealed better distensibility parameters in very fit boys and girls.

  19. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
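
    A minimal sketch of the Legendre-domain idea, using NumPy's Legendre module on a synthetic noisy exponential: projecting onto a low-order Legendre basis and reconstructing acts as a smoother without the phase shift of a causal lowpass filter. The signal, noise level, and polynomial degree are all illustrative choices.

        import numpy as np
        from numpy.polynomial import legendre

        t = np.linspace(-1.0, 1.0, 500)              # time mapped onto [-1, 1]
        clean = 2.0 * np.exp(-1.5 * (t + 1.0))       # synthetic exponential decay
        noisy = clean + np.random.default_rng(1).normal(scale=0.2, size=t.size)

        coeffs = legendre.legfit(t, noisy, deg=8)    # low-dimensional Legendre representation
        filtered = legendre.legval(t, coeffs)        # reconstruction = smoothed signal

        rms = lambda x: np.sqrt(np.mean(x ** 2))
        print("rms error raw: %.3f, filtered: %.3f" % (rms(noisy - clean), rms(filtered - clean)))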

  20. A Simple Model for Fine Structure Transitions in Alkali-Metal Noble-Gas Collisions

    DTIC Science & Technology

    2015-03-01

    63 33 Effect of Scaling the VRG(R) Radial Coupling Fit Parameter, V0, for KHe, KNe, and KAr...64 ix Figure Page 34 Effect of Scaling the VRG(R) Radial Coupling Fit Parameter, V0, for RbHe, RbNe, and...RbAr . . . . . . . . . . . . . . . . . . . . . . . . . 64 35 Effect of Scaling the VRG(R) Radial Coupling Fit Parameter, V0, for CsHe, CsNe, and CsAr

  1. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  2. Simple and Reliable Determination of Intravoxel Incoherent Motion Parameters for the Differential Diagnosis of Head and Neck Tumors

    PubMed Central

    Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi

    2014-01-01

    Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues; IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for determining IVIM parameters using geometric analysis of the multiexponential signal decay curve, as an alternative to the least-squares method, for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared between the two techniques for their values and diagnostic abilities. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from the Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Given the cumbersome procedures required by the least-squares method compared with the geometric method, we conclude that geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
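
    The least-squares side of this comparison fits the standard biexponential IVIM decay, S(b)/S0 = f exp(-b D*) + (1 - f) exp(-b D); a sketch with SciPy and simulated data (tissue parameters and noise level invented) is shown below. The geometric method of the paper is not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, f, d_star, d):
            """Biexponential IVIM signal decay S(b)/S0."""
            return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

        b = np.array([0, 50, 100, 200, 400, 600, 800, 1000], dtype=float)  # s/mm^2
        s = ivim(b, 0.12, 0.015, 0.0009)                                   # hypothetical tissue
        s_noisy = s + np.random.default_rng(2).normal(scale=0.005, size=b.size)

        popt, _ = curve_fit(ivim, b, s_noisy, p0=[0.1, 0.01, 0.001],
                            bounds=([0.0, 0.003, 0.0], [0.5, 0.1, 0.003]))
        print("f = %.3f, D* = %.4f, D = %.5f" % tuple(popt))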

  3. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    NASA Astrophysics Data System (ADS)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratios maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.

  4. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, J.; Winkler, J.; Christensen, D.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate), and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  5. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
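
    The detailed-balance trick described above (draw the first state exactly from the target by rejection sampling, then let a Metropolis kernel that preserves the target generate the rest) can be sketched in a few lines; the one-dimensional "posterior" below is a hypothetical stand-in for a distribution over linkage model space.

        import numpy as np

        rng = np.random.default_rng(4)

        def target(x):
            """Unnormalized target density (hypothetical)."""
            return (np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)
                    + 0.5 * np.exp(-0.5 * ((x + 1.0) / 0.3) ** 2))

        # Step 1: rejection-sample the starting point from the target itself
        x0 = None
        while x0 is None:
            x = rng.uniform(-4.0, 4.0)
            if rng.uniform() < target(x) / 1.5:      # 1.5 bounds the unnormalized density
                x0 = x

        # Step 2: Metropolis updates preserve the target, so by detailed balance
        # every state of the chain is a draw from the equilibrium distribution.
        chain = [x0]
        for _ in range(10_000):
            proposal = chain[-1] + rng.normal(scale=0.5)
            if rng.uniform() < target(proposal) / target(chain[-1]):
                chain.append(proposal)
            else:
                chain.append(chain[-1])
        print(np.mean(chain), np.std(chain))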

  6. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randolph, B.

    Composite liners have been fabricated for the Los Alamos liner-driven HEDP experiments using impactors formed by physical vapor deposition (PVD), electroplating, machining, and shrink fitting. Chemical vapor deposition (CVD) has been proposed for some ATLAS liner applications. This paper describes the processes used to fabricate machined and shrink-fitted impactors, which have been used for copper impactors in 1100 aluminum liners and 6061 T-6 aluminum impactors in 1100 aluminum liners. The most successful processes have been largely empirically developed and rely upon a combination of shrink fitting and light press fitting. The processes used to date are described, along with some considerations for future composite liner requirements in the HEDP Program.

  8. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  9. Benchmarking test of empirical root water uptake models

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman

    2017-01-01

    Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at reproducing the RWU patterns of the physical model; the statistical indices point to them as the best alternatives for mimicking its RWU predictions.
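
    For reference, a minimal sketch of the standard Feddes scheme that the empirical models above build on: a piecewise-linear stress reduction factor α(h) applied to a prescribed root distribution, with no compensation. The threshold heads and layer values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-4.0, h4=-80.0):
    """Piecewise-linear Feddes stress reduction factor alpha(h).
    h is pressure head (m, negative); thresholds are illustrative."""
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    wet = (h <= h1) & (h > h2)                 # rise from anaerobic limit
    alpha[wet] = (h1 - h[wet]) / (h1 - h2)
    alpha[(h <= h2) & (h >= h3)] = 1.0         # optimal range
    dry = (h < h3) & (h > h4)                  # decline toward wilting
    alpha[dry] = (h[dry] - h4) / (h3 - h4)
    return alpha

# Uncompensated RWU per layer: S = alpha(h) * root_fraction * Tp
Tp = 5e-3                                      # potential transpiration, m/day
root_frac = np.array([0.4, 0.3, 0.2, 0.1])
h_layers = np.array([-0.3, -2.0, -10.0, -90.0])
print(feddes_alpha(h_layers) * root_frac * Tp)
```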

  10. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are unable to jump out of local performance maxima, like the hill climbing algorithm, often terminate in a local maximum.
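
    A minimal sketch of one of the named strategies, coordinate descent over two hypothetical pipeline parameters (a threshold and a smoothing width); the scoring function is a stand-in for a real segmentation quality measure. Note that alternating one-dimensional sweeps can still stall on the local performance maxima the study warns about.

```python
import numpy as np

def score(threshold, sigma):
    """Hypothetical segmentation quality; higher is better."""
    return -(threshold - 0.6) ** 2 - 0.5 * (sigma - 2.0) ** 2

thresholds = np.linspace(0.0, 1.0, 101)
sigmas = np.linspace(0.5, 5.0, 46)

t, s = 0.2, 4.0                       # initial guess
for _ in range(10):                   # alternate one-dimensional sweeps
    t = thresholds[np.argmax([score(x, s) for x in thresholds])]
    s = sigmas[np.argmax([score(t, x) for x in sigmas])]
print(t, s, score(t, s))
```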

  11. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are unable to jump out of local performance maxima, like the hill climbing algorithm, often terminate in a local maximum. PMID:23766941

  12. There is no fitness but fitness, and the lineage is its bearer

    PubMed Central

    2016-01-01

    Inclusive fitness has been the cornerstone of social evolution theory for more than a half-century and has matured as a mathematical theory in the past 20 years. Yet surprisingly for a theory so central to an entire field, some of its connections to evolutionary theory more broadly remain contentious or underappreciated. In this paper, we aim to emphasize the connection between inclusive fitness and modern evolutionary theory through the following fact: inclusive fitness is simply classical Darwinian fitness, averaged over social, environmental and demographic states that members of a gene lineage experience. Therefore, inclusive fitness is neither a generalization of classical fitness, nor does it belong exclusively to the individual. Rather, the lineage perspective emphasizes that evolutionary success is determined by the effect of selection on all biological and environmental contexts that a lineage may experience. We argue that this understanding of inclusive fitness based on gene lineages provides the most illuminating and accurate picture and avoids pitfalls in interpretation and empirical applications of inclusive fitness theory. PMID:26729925

  13. Empirical expression for DC magnetization curve of immobilized magnetic nanoparticles for use in biomedical applications

    NASA Astrophysics Data System (ADS)

    Elrefai, Ahmed L.; Sasayama, Teruyoshi; Yoshida, Takashi; Enpuku, Keiji

    2018-05-01

    We studied the magnetization (M-H) curve of immobilized magnetic nanoparticles (MNPs) used for biomedical applications. First, we performed numerical simulations of the DC M-H curve over a wide range of MNP parameters. Based on the simulation results, we obtained an empirical expression for the DC M-H curve. The empirical expression was compared with the measured M-H curves of various MNP samples, and quantitative agreement was obtained between them. The basic parameters of the MNPs can also be estimated from this comparison. Therefore, the empirical expression is useful for analyzing the M-H curve of immobilized MNPs for specific biomedical applications.
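
    The paper's empirical expression is not reproduced here, but the ideal superparamagnetic baseline that such expressions refine is the Langevin M-H curve, sketched below with illustrative particle parameters.

```python
import numpy as np

MU0 = 4e-7 * np.pi           # vacuum permeability, T*m/A
KB, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), temperature (K)

def langevin_mh(H, Ms=3.0e5, d_core=20e-9):
    """M-H curve of an ideal superparamagnetic ensemble (Langevin model).
    Ms: saturation magnetization (A/m); d_core: core diameter (m).
    Values are illustrative, not fitted to any measured sample."""
    V = np.pi / 6.0 * d_core ** 3                 # particle core volume
    xi = MU0 * Ms * V * H / (KB * T)              # Langevin argument
    xi = np.where(np.abs(xi) < 1e-8, 1e-8, xi)    # avoid 0/0 at H = 0
    return Ms * (1.0 / np.tanh(xi) - 1.0 / xi)    # Ms * L(xi)

H = np.linspace(-4e4, 4e4, 9)                     # applied field, A/m
print(langevin_mh(H))
```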

  14. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model, and the parameters of the therapy function are estimated by fitting the model to empirical data. The reported research works have not presented any algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.

  15. Investigation of the relationships between the thermodynamic phase behavior and gelation behavior of a series of tripodal trisamide compounds

    NASA Astrophysics Data System (ADS)

    Feng, Li

    Low molecular weight organic gelators (LMOGs) are important due to potential applications in many fields. Currently, most studies focus on empirical explanations of the crystallization underlying gelator assembly formation and morphologies; few efforts have been devoted to the thermodynamic phase behavior and the effect of non-ideal solution behavior on the structure of the resultant gels. In this research, tripodal trisamide compounds, synthesized from tris(2-aminoethyl)amine (TREN) by condensation with different acid chlorides, were studied as model LMOGs because of the simple one-step reaction and the commercially available chemical reactants. Gelation of organic solvents was investigated as a function of concentration and solvent solubility parameter. It was found that the introduction of branched or cyclic peripheral units dramatically improves the gelation ability compared to linear alkyl peripheral units. Fitting the liquidus lines using the regular solution model and calculating the trisamide solubility parameter using solubility parameter theory gave good agreement with the trisamide solubility parameter calculated by group contribution methods. These results demonstrate that non-ideal solution behavior is an important factor in the gelation behavior of low molecular mass organic gelators. Understanding and controlling the thermodynamics and phase behavior of these gel systems will provide effective ways to produce new, efficient LMOGs in the future.

  16. A Survey of Xenon Ion Sputter Yield Data and Fits Relevant to Electric Propulsion Spacecraft Integration

    NASA Technical Reports Server (NTRS)

    Yim, John T.

    2017-01-01

    A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
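
    A minimal sketch of this fitting approach, assuming a generic threshold-power yield form Y(E) = A(E − Eth)^p rather than the specific fit formulas assessed in the survey, with synthetic data standing in for the compiled measurements; a plain Metropolis sampler replaces whatever MCMC variant the paper used.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.array([50., 100., 200., 300., 500., 750., 1000.])    # ion energy, eV
Y = 0.002 * (E - 30.0) ** 0.9 * rng.lognormal(0.0, 0.1, E.size)
sigma = 0.15 * Y                                             # ~15% uncertainty

def log_post(theta):
    """Gaussian likelihood with flat, bounded priors on (A, Eth, p)."""
    A, Eth, p = theta
    if A <= 0 or p <= 0 or not 0 < Eth < E.min():
        return -np.inf
    model = A * (E - Eth) ** p
    return -0.5 * np.sum(((Y - model) / sigma) ** 2)

theta = np.array([0.001, 20.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):                       # Metropolis random walk
    prop = theta + rng.normal(0, [2e-4, 2.0, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])           # discard burn-in
print("posterior medians:", np.median(samples, axis=0))
```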

  17. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
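
    NLINEAR itself is Fortran 77, but the statistically weighted nonlinear least-squares procedure it implements can be sketched in modern terms; here scipy's Levenberg-Marquardt-style fit stands in for the quadratic chi-square expansion, and the model and data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """User-specified fitting function."""
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
sigma = 0.05 * np.ones_like(x)                 # measurement uncertainties
y = model(x, 2.0, 1.3, 0.5) + rng.normal(0, sigma)

# Meaningful initial estimates are required for convergence, as in NLINEAR.
p0 = [1.0, 1.0, 0.0]
popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
print(popt, np.sqrt(np.diag(pcov)), chi2 / (x.size - len(popt)))
```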

  18. Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures

    DTIC Science & Technology

    2015-03-01

    of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler … value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain … data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or

  19. Achieving Best-Fit Configurations through Advisory Subsystems in AKIS: Case Studies of Advisory Service Provisioning for Diverse Types of Farmers in Norway

    ERIC Educational Resources Information Center

    Klerkx, Laurens; Straete, Egil Petter; Kvam, Gunn-Turid; Ystad, Eystein; Butli Hårstad, Renate Marie

    2017-01-01

    Purpose: In light of the discussion on "best-fit" in pluralistic advisory systems, this article aims to present and discuss challenges for advisory services in serving various types of farmers when they seek and acquire farm business advice. Design/methodology/approach: The empirical basis is data derived from four workshops, five…

  20. Approximation of the breast height diameter distribution of two-cohort stands by mixture models II Goodness-of-fit tests

    Treesearch

    Rafal Podlaski; Francis .A. Roesch

    2013-01-01

    The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...

  1. Dynamic properties in the four-state haploid coupled discrete-time mutation-selection model with an infinite population limit

    NASA Astrophysics Data System (ADS)

    Lee, Kyu Sang; Gill, Wonpyong

    2017-11-01

    The dynamic properties, such as the crossing time and the time dependence of the relative density, of the four-state haploid coupled discrete-time mutation-selection model were calculated with the assumption that μ_ij = μ_ji, where μ_ij denotes the mutation rate between the sequence elements i and j. The crossing time for s = 0 and r_23 = r_42 = 1 in the four-state model became saturated at a large fitness parameter when r_12 > 1, was scaled as a power law in the fitness parameter when r_12 = 1, and diverged as the fitness parameter approached the critical fitness parameter when r_12 < 1, where r_ij = μ_ij/μ_14.

  2. How an Organization's Environmental Orientation Impacts Environmental Performance and Its Resultant Financial Performance through Green Computing Hiring Practices: An Empirical Investigation of the Natural Resource-Based View of the Firm

    ERIC Educational Resources Information Center

    Aken, Andrew Joseph

    2010-01-01

    This dissertation uses the logic embodied in Strategic Fit Theory, the Natural Resource-Based View of the Firm (NRBV), strategic human resource management, and other relevant literature streams to empirically demonstrate how the environmental orientation of a firm's strategy impacts their environmental performance and resultant financial…

  3. Using Office Discipline Referral Data for Decision Making about Student Behavior in Elementary and Middle Schools: An Empirical Evaluation of Validity

    ERIC Educational Resources Information Center

    Irvin, Larry K.; Horner, Robert H.; Ingram, Kimberly; Todd, Anne W.; Sugai, George; Sampson, Nadia Katul; Boland, Joseph B.

    2006-01-01

    In this evaluation we used Messick's construct validity as a conceptual framework for an empirical study assessing the validity of use, utility, and impact of office discipline referral (ODR) measures for data-based decision making about student behavior in schools. The Messick approach provided a rubric for testing the fit of our theory of use of…

  4. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ². This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or if a large number of data points are fitted.
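
    The effect is easy to reproduce numerically. The sketch below uses assumed values (two measurements of the same quantity, small independent errors, a common 10% normalization uncertainty), minimizes the covariance-based χ² analytically, and yields a best-fit value below both data points.

```python
import numpy as np

y = np.array([8.0, 8.5])
sig = np.array([0.1, 0.1])                     # independent errors
f = 0.10                                       # common normalization uncertainty
V = np.diag(sig**2) + f**2 * np.outer(y, y)    # linearized covariance matrix
Vinv = np.linalg.inv(V)

# chi2(k) = (y - k)^T Vinv (y - k) is quadratic in k; minimize analytically.
ones = np.ones(2)
k_hat = ones @ Vinv @ y / (ones @ Vinv @ ones)
print(k_hat)   # lies below both measurements (the bias in question)
```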

  5. Consumer involvement in seafood as family meals in Norway: an application of the expectancy-value approach.

    PubMed

    Olsen, S O

    2001-04-01

    A theoretical model of involvement in the consumption of food products was tested in a representative survey of Norwegian households for the particular case of consuming seafood as a common family meal. The empirical study uses a structural equation approach to test the construct validity of the measures and the empirical fit of the theoretical model. Attitudes, negative feelings, social norms and moral obligation proved to be important, reliable and distinct constructs and explained 63% of the variation in seafood involvement. Negative feelings and moral obligation were the most important antecedents of involvement. Both the proposed model and a modified model with seafood involvement as a mediator fit the data well and supported our expectations.

  6. INFOS: spectrum fitting software for NMR analysis.

    PubMed

    Smith, Albert A

    2017-02-01

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.

  7. Empirical fitness models for hepatitis C virus immunogen design

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost-effective hope of controlling this epidemic in the developing world, where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for an HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.

  8. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.

  9. The soil water characteristic as new class of closed-form parametric expressions for the flow duration curve

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Vrugt, J. A.; Gupta, H. V.; Xu, C.

    2016-04-01

    The flow duration curve (FDC) is a signature catchment characteristic that depicts graphically the relationship between the exceedance probability of streamflow and its magnitude. This curve is relatively easy to create and interpret, and is used widely for hydrologic analysis, water quality management, and the design of hydroelectric power plants (among others). Several mathematical expressions have been proposed to mimic the FDC. Yet, these efforts have not been particularly successful, in large part because the available functions are not flexible enough to portray accurately the functional shape of the FDC for a large range of catchments and contrasting hydrologic behaviors. Here, we extend the work of Vrugt and Sadegh (2013) and introduce several commonly used models of the soil water characteristic as a new class of closed-form parametric expressions for the flow duration curve. These soil water retention functions are relatively simple to use, contain between two and three parameters, and mimic closely the empirical FDCs of 430 catchments of the MOPEX data set. We then relate the calibrated parameter values of these models to physical and climatological characteristics of the watershed using multivariate linear regression analysis, and evaluate the regionalization potential of our proposed models against those of the literature. If quality of fit is of main importance, then the 3-parameter van Genuchten model is preferred, whereas the 2-parameter lognormal, 3-parameter GEV and generalized Pareto models show greater promise for regionalization.
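
    A minimal sketch of the idea, using the inverted van Genuchten retention form as a two-parameter FDC model with exceedance probability in place of effective saturation; this particular parameterization and the synthetic data standing in for a MOPEX record are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def vg_fdc(E, a, n):
    """Streamflow quantile as a function of exceedance probability E in (0,1),
    from the inverted van Genuchten retention curve."""
    m = 1.0 - 1.0 / n
    return (1.0 / a) * (E ** (-1.0 / m) - 1.0) ** (1.0 / n)

E = np.linspace(0.05, 0.95, 19)
rng = np.random.default_rng(2)
Q = vg_fdc(E, a=0.8, n=1.8) * rng.lognormal(0.0, 0.05, E.size)  # synthetic FDC

popt, _ = curve_fit(vg_fdc, E, Q, p0=[1.0, 1.5],
                    bounds=([1e-3, 1.05], [10.0, 10.0]))  # keep n > 1
print("fitted a, n:", popt)
```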

  10. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
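
    As context for the sensitivity results, the sketch below fits the common event-scale power law −dQ/dt = aQ^b to a single synthetic recession in log-log space; the choices the paper examines (event definition, differencing scheme, fitting technique) all sit upstream of this step.

```python
import numpy as np

Q = np.array([12.0, 9.1, 7.3, 6.1, 5.2, 4.6, 4.1, 3.7])  # daily flow, m^3/s
dQdt = np.diff(Q)                      # per-day differences (negative)
Qmid = 0.5 * (Q[1:] + Q[:-1])          # flow at interval midpoints

# Ordinary least squares on log(-dQ/dt) = log(a) + b*log(Q)
b, log_a = np.polyfit(np.log(Qmid), np.log(-dQdt), 1)
print("b =", b, "a =", np.exp(log_a))
```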

  11. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621

  12. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.

  13. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion, and satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the approach based on the concept of incubation time an advantage over the empirical models, providing an effective and convenient equation for determining the yield strength over a wider range of strain rates.

  14. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m·c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10⁵M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available Python package.

  15. VizieR Online Data Catalog: Activity cycles in 3203 Kepler stars (Reinhold+, 2017)

    NASA Astrophysics Data System (ADS)

    Reinhold, T.; Cameron, R. H.; Gizon, L.

    2017-05-01

    Rvar time series, sine fit parameters, mean rotation periods, and false alarm probabilities of all 3203 Kepler stars are presented. For simplicity, the KIC number and the fit parameters of a given star are repeated in each line. The fit function to the Rvar(t) time series is y_fit = A_cyc * sin(2π/(P_cyc*365) * (t − t_0)) + Offset. (2 data files).
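
    A direct transcription of the catalogue's fit function, with t in days and P_cyc in years (hence the factor of 365); the parameter values below are illustrative, not taken from the catalogue.

```python
import numpy as np

def y_fit(t, Acyc, Pcyc, t0, offset):
    """Sine model for the Rvar(t) activity-cycle time series."""
    return Acyc * np.sin(2.0 * np.pi / (Pcyc * 365.0) * (t - t0)) + offset

t = np.linspace(0, 1400, 8)            # Kepler-like baseline, days
print(y_fit(t, Acyc=0.01, Pcyc=2.5, t0=100.0, offset=0.02))
```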

  16. An empirical model of human aspiration in low-velocity air using CFD investigations.

    PubMed

    Anthony, T Renée; Anderson, Kimberly R

    2015-01-01

    Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.

  17. Reproducibility of isopach data and estimates of dispersal and eruption volumes

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.

    2012-12-01

    Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square-root-area plots, fitting empirical laws to the thinning relationships, and integrating over the square-root area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates, and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand-contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as using the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences from the hand-contours of the well-constrained deposit and eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume approximates to 1.76 × 10⁶ m³ (geometric mean for all methods) with maximum and minimum estimates of 2.5 × 10⁶ m³ and 1.1 × 10⁶ m³. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates with methods (A), (C) and (D) show strong correlation (r = 0.8 to r = 0.9), while the correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and the correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlations between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the different methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals etc., and more sensitive to the selection of the technique used to integrate these data.
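
    A worked example of integration method (B) with a single exponential segment: for thickness T = T0·exp(−kx) against square-root area x, the volume integral has the closed form V = ∫T dA = 2T0/k². The isopach values below are illustrative, not the Kilauea Iki measurements.

```python
import numpy as np

thickness = np.array([3.5, 2.0, 1.0, 0.5, 0.25])      # isopach thickness, m
area = np.array([0.2, 0.6, 1.5, 3.2, 6.0]) * 1e6      # enclosed area, m^2
x = np.sqrt(area)                                     # square-root area, m

# Fit ln(T) = ln(T0) - k*x, then integrate the exponential in closed form.
coef = np.polyfit(x, np.log(thickness), 1)
k, T0 = -coef[0], np.exp(coef[1])
volume = 2.0 * T0 / k**2
print(f"T0 = {T0:.2f} m, k = {k:.2e} 1/m, volume = {volume:.2e} m^3")
```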

  18. Fundamental Parameters Line Profile Fitting in Laboratory Diffractometers

    PubMed Central

    Cheary, R. W.; Coelho, A. A.; Cline, J. P.

    2004-01-01

    The fundamental parameters approach to line profile fitting uses physically based models to generate the line profile shapes. Fundamental parameters profile fitting (FPPF) has been used to synthesize and fit data from both parallel beam and divergent beam diffractometers. The refined parameters are determined by the diffractometer configuration. In a divergent beam diffractometer these include the angular aperture of the divergence slit, the width and axial length of the receiving slit, the angular apertures of the axial Soller slits, the length and projected width of the x-ray source, and the absorption coefficient and axial length of the sample. In a parallel beam system the principal parameters are the angular aperture of the equatorial analyser/Soller slits and the angular apertures of the axial Soller slits. The presence of a monochromator in the beam path is normally accommodated by modifying the wavelength spectrum and/or by changing one or more of the axial divergence parameters. Flat analyzer crystals have been incorporated into FPPF as a Lorentzian-shaped angular acceptance function. One of the intrinsic benefits of the fundamental parameters approach is its adaptability to any laboratory diffractometer. Good fits can normally be obtained over the whole 2θ range without refinement, using the known properties of the diffractometer, such as the slit sizes, diffractometer radius, and emission profile. PMID:27366594

  19. Broadband spectral fitting of blazars using XSPEC

    NASA Astrophysics Data System (ADS)

    Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev

    2018-03-01

    The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma ray flare in 2014.

  20. On the Complexity of Item Response Theory Models.

    PubMed

    Bonifay, Wes; Cai, Li

    2017-01-01

    Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models (exploratory factor analytic, bifactor, DINA, and DINO) with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had one more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form, and not goodness-of-fit alone, when selecting an IRT model.

  1. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 ℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (R²_adj > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type model and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, where the desired reductions (considering d = 5 (t_5) as the criterion of a 5-log₁₀ reduction (5D)) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
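
    A minimal sketch of the primary-model step, fitting the Weibull survival form log10(N/N0) = −b·t^n at one pressure level and solving for the time to a 5-log₁₀ reduction; the survival data below are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Weibull primary model: log10(N/N0) = -b * t**n."""
    return -b * t ** n

t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])            # minutes
logS = np.array([-0.6, -1.1, -1.9, -3.0, -3.8, -4.4])   # log10(N/N0), tailing

(b, n), _ = curve_fit(weibull_log_survival, t, logS, p0=[1.0, 1.0])
t5 = (5.0 / b) ** (1.0 / n)     # time to a 5-log10 reduction
print("b =", b, "n =", n, "t_5 =", t5, "min")
```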

  2. NDSD-1000: High-resolution, high-temperature Nitrogen Dioxide Spectroscopic Databank

    NASA Astrophysics Data System (ADS)

    Lukashevskaya, A. A.; Lavrentieva, N. N.; Dudaryonok, A. C.; Perevalov, V. I.

    2016-11-01

    We present a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The databank contains the line parameters (positions, intensities, self- and air-broadening coefficients, and exponents of the temperature dependence of self- and air-broadening coefficients) of the principal isotopologue of NO2. The reference temperature for line intensity is 296 K and the intensity cutoff is 10⁻²⁵ cm⁻¹/(molecule·cm⁻²) at 1000 K. The broadening parameters are presented for two reference temperatures, 296 K and 1000 K. The databank has 1,046,808 entries, covers five spectral regions in the 466-4776 cm⁻¹ spectral range and is designed for temperatures up to 1000 K. The databank is based on the global modeling of the line positions and intensities performed within the framework of the method of effective operators. The parameters of the effective Hamiltonian and the effective dipole moment operator have been fitted to the observed values of the line positions and intensities collected from the literature. The broadening coefficients as well as the temperature exponents are calculated using a semi-empirical approach. The databank is useful for studying high-temperature radiative properties of NO2. NDSD-1000 is freely accessible via the internet site of the V.E. Zuev Institute of Atmospheric Optics SB RAS: ftp://ftp.iao.ru/pub/NDSD/.

  3. Forecasting the mortality rates of Malaysian population using Heligman-Pollard model

    NASA Astrophysics Data System (ADS)

    Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd

    2017-08-01

    Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are explicitly difficult to solve, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software is used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) is applied to forecast all the parameters with an Autoregressive Integrated Moving Average (ARIMA) model. Empirical data sets for the Malaysian population over the period 1981 to 2015, for both genders, are considered; the period 1981 to 2010 is used as the "training set" and the period 2011 to 2015 as the "testing set". To investigate the accuracy of the estimation, the forecast results are compared against actual mortality data. The results show that the Heligman-Pollard model fits well for the male population at all ages, while the model appears to underestimate the mortality rates of the female population at older ages.
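
    For reference, the eight-parameter Heligman-Pollard law expresses the odds of death q_x/(1 − q_x) as the sum of a childhood term, an accident-hump term, and a senescent term. The sketch below evaluates it with illustrative parameter values, not values fitted to the Malaysian data.

```python
# Heligman-Pollard law:
#   q_x/(1 - q_x) = A**((x + B)**C) + D*exp(-E*(ln(x) - ln(F))**2) + G*H**x
import numpy as np

def heligman_pollard_q(x, A, B, C, D, E, F, G, H):
    ratio = (A ** ((x + B) ** C)                              # childhood
             + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)  # accident hump
             + G * H ** x)                                    # senescence
    return ratio / (1.0 + ratio)

ages = np.arange(1, 100)
q = heligman_pollard_q(ages, A=5e-4, B=0.01, C=0.10, D=1e-3,
                       E=10.0, F=20.0, G=5e-5, H=1.10)
print(q[[0, 19, 59, 98]])   # q_x at ages 1, 20, 60, 99
```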

  4. Eye and Head Movement Characteristics in Free Visual Search of Flight-Simulator Imagery

    DTIC Science & Technology

    2010-03-01

    conspicuity. However, only gaze amplitude varied significantly with IFOV. A two-parameter (scale and exponent) power function was fitted to the … main-sequence amplitude-duration data. Both parameters varied significantly with target conspicuity, but in opposite directions. Neither parameter … IFOV.

  5. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models of appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we first attempt to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and the mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can the inclusion of magnetic effects in stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/(4πγp_gas) (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10⁶ G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.

  6. Performance of Ultrafast DCE-MRI for Diagnosis of Prostate Cancer.

    PubMed

    Chatterjee, Aritrick; He, Dianning; Fan, Xiaobing; Wang, Shiyang; Szasz, Teodora; Yousuf, Ambereen; Pineda, Federico; Antic, Tatjana; Mathew, Melvy; Karczmar, Gregory S; Oto, Aytekin

    2018-03-01

    This study aimed to test high temporal resolution dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) for different zones of the prostate and evaluate its performance in the diagnosis of prostate cancer (PCa), and to determine whether the addition of ultrafast DCE-MRI improves the performance of multiparametric MRI. Patients (n = 20) with pathologically confirmed PCa underwent preoperative 3T MRI with T2-weighted, diffusion-weighted, and high temporal resolution (~2.2 seconds) DCE-MRI using gadoterate meglumine (Guerbet, Bloomington, IN) without an endorectal coil. DCE-MRI data were analyzed by fitting signal intensity with an empirical mathematical model to obtain the parameters: percent signal enhancement, enhancement rate (α), washout rate (β), initial enhancement slope, and enhancement start time, along with apparent diffusion coefficient (ADC) and T2 values. Regions of interest were placed on sites of prostatectomy-verified malignancy (n = 46) and normal tissue (n = 71) from different zones. Cancer (α = 6.45 ± 4.71 s⁻¹, β = 0.067 ± 0.042 s⁻¹, slope = 3.78 ± 1.90 s⁻¹) showed significantly (P < .05) faster signal enhancement and washout rates than normal tissue (α = 3.0 ± 2.1 s⁻¹, β = 0.034 ± 0.050 s⁻¹, slope = 1.9 ± 1.4 s⁻¹), but similar percent signal enhancement and enhancement start time. Receiver operating characteristic analysis showed that the area under the curve for DCE parameters was comparable to ADC and T2 in the peripheral (DCE 0.67-0.82, ADC 0.80, T2 0.89) and transition zones (DCE 0.61-0.72, ADC 0.69, T2 0.75), but higher in the central zone (DCE 0.79-0.88, ADC 0.45, T2 0.45) and anterior fibromuscular stroma (DCE 0.86-0.89, ADC 0.35, T2 0.12). Importantly, combining DCE with ADC and T2 increased the area under the curve by ~30%, further improving the diagnostic accuracy of PCa detection. Quantitative parameters from empirical mathematical model fits to ultrafast DCE-MRI improve the diagnosis of PCa. DCE-MRI with higher temporal resolution may capture clinically useful information for PCa diagnosis that would be missed by low temporal resolution DCE-MRI. This new information could improve the performance of multiparametric MRI in PCa detection.

  7. Subzero water permeability parameters and optimal freezing rates for sperm cells of the southern platyfish, Xiphophorus maculatus.

    PubMed

    Pinisetty, D; Huang, C; Dong, Q; Tiersch, T R; Devireddy, R V

    2005-06-01

    This study reports the subzero water transport characteristics (and empirically determined optimal rates for freezing) of sperm cells of live-bearing fishes of the genus Xiphophorus, specifically those of the southern platyfish Xiphophorus maculatus. These fishes are valuable models for biomedical research and are commercially raised as ornamental fish for use in aquariums. Water transport during freezing of X. maculatus sperm cell suspensions was obtained using a shape-independent differential scanning calorimeter technique in the presence of extracellular ice at a cooling rate of 20 °C/min in three different media: (1) Hanks' balanced salt solution (HBSS) without cryoprotective agents (CPAs); (2) HBSS with 14% (v/v) glycerol; and (3) HBSS with 10% (v/v) dimethyl sulfoxide (DMSO). The sperm cell was modeled as a cylinder with a length of 52.35 μm and a diameter of 0.66 μm with an osmotically inactive cell volume (Vb) of 0.6 V0, where V0 is the isotonic or initial cell volume. This translates to a ratio of surface area SA to initial water volume WV of 15.15 μm⁻¹. By fitting a model of water transport to the experimentally determined volumetric shrinkage data, the best-fit membrane permeability parameters (reference membrane permeability to water at 0 °C, Lpg or Lpg[cpa], and the activation energy, E(Lp) or E(Lp)[cpa]) were found to range from: Lpg or Lpg[cpa] = 0.0053-0.0093 μm/(min·atm); E(Lp) or E(Lp)[cpa] = 9.79-29.00 kcal/mol. By incorporating these membrane permeability parameters in a recently developed generic optimal cooling rate equation (optimal cooling rate, [Formula: see text], where the units of B(opt) are °C/min, E(Lp) or E(Lp)[cpa] are kcal/mol, L(pg) or L(pg)[cpa] are μm/(min·atm) and SA/WV are μm⁻¹), we determined the optimal rates of freezing X. maculatus sperm cells to be 28 °C/min (in HBSS), 47 °C/min (in HBSS + 14% glycerol) and 36 °C/min (in HBSS + 10% DMSO). Preliminary empirical experiments suggest that the optimal rate of freezing X. maculatus sperm in the presence of 14% glycerol is approximately 25 °C/min. Possible reasons for the observed discrepancy between the theoretically predicted and experimentally determined optimal rates of freezing X. maculatus sperm cells are discussed.

  8. The species-area relationship, self-similarity, and the true meaning of the z-value.

    PubMed

    Tjørve, Even; Tjørve, Kathleen M Calf

    2008-12-01

    The power model, S = cA^z (where S is the number of species, A is area, and c and z are fitted constants), is the model most commonly fitted to species-area data assessing species diversity. We use the self-similarity properties of this model to reveal patterns implied by the z parameter. We present the basic arithmetic leading both to the fraction of new species added when two areas are combined and to species overlap between two areas of the same size, given a continuous sampling scheme. The fraction of new species resulting from expansion of an area can be expressed as α^z − 1, where α is the expansion factor. Consequently, z-values can be converted to a scale-invariant species overlap between two equally sized areas, since the proportion of species in common between the two areas is 2 − 2^z. Calculating overlap when adding areas of the same size reveals the intrinsic effect of distance assumed by the bisectional scheme. We use overlap-area relationships from empirical data sets to illustrate how answers to the single large or several small reserves (SLOSS) question vary between data sets and with scale. We conclude that species overlap and the effect of distance between sample areas or isolates should be addressed when discussing species-area relationships, and that lack of fit to the power model can be caused by its assumption of a scale-invariant overlap relationship.
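
    Both self-similarity results translate directly into code; a minimal sketch (the z-values are arbitrary illustrations):

        def new_species_fraction(expansion_factor, z):
            # Fraction of new species gained when an area is expanded
            # by `expansion_factor`, under the power model S = c*A**z.
            return expansion_factor**z - 1.0

        def overlap_equal_areas(z):
            # Proportion of species shared by two equally sized areas.
            return 2.0 - 2.0**z

        for z in (0.15, 0.25, 0.35):
            print(f"z = {z}: doubling adds {new_species_fraction(2.0, z):.1%} "
                  f"new species; overlap = {overlap_equal_areas(z):.1%}")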

  9. Scattering analysis of LOFAR pulsar observations

    NASA Astrophysics Data System (ADS)

    Geyer, M.; Karastergiou, A.; Kondratiev, V. I.; Zagkouris, K.; Kramer, M.; Stappers, B. W.; Grießmeier, J.-M.; Hessels, J. W. T.; Michilli, D.; Pilia, M.; Sobey, C.

    2017-09-01

    We measure the effects of interstellar scattering on average pulse profiles from 13 radio pulsars with simple pulse shapes. We use data from the LOFAR High Band Antennas, at frequencies between 110 and 190 MHz. We apply a forward-fitting technique and simultaneously determine the intrinsic pulse shape, assuming single-Gaussian component profiles. We find that the scattering timescale τ, associated with scattering by a single thin screen, has a power-law dependence on frequency, τ ∝ ν^−α, with indices ranging from α = 1.50 to 4.0, despite the simplest theoretical models predicting α = 4.0 or 4.4. Modelling the screen as an isotropic or extremely anisotropic scatterer, we find that anisotropic scattering fits lead to larger power-law indices, often in better agreement with theoretically expected values. We compare the scattering models based on the inferred, frequency-dependent parameters of the intrinsic pulse, and the resulting correction to the dispersion measure (DM). We highlight the cases in which fits of extreme anisotropic scattering are appealing, while stressing that the data do not strictly favour either model for any of the 13 pulsars. The pulsars show anomalous scattering properties that are consistent with finite scattering screens and/or anisotropy, but these data alone do not provide the means for an unambiguous characterization of the screens. We revisit the empirical τ versus DM relation and consider how our results support a frequency dependence of α. Very long baseline interferometry, and observations of the scattering and scintillation properties of these sources at higher frequencies, will provide further evidence.
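
    The power-law index α is conventionally estimated as the slope of a straight-line fit in log-log space. A minimal sketch with hypothetical scattering timescales at LOFAR HBA frequencies:

        import numpy as np

        freq = np.array([115.0, 130.0, 145.0, 160.0, 175.0, 190.0])  # MHz
        tau = np.array([42.0, 26.0, 17.5, 12.0, 8.8, 6.5])           # ms

        # Fit tau = tau_150 * (freq/150)**(-alpha) by linear regression in log space
        slope, intercept = np.polyfit(np.log(freq / 150.0), np.log(tau), 1)
        alpha, tau_150 = -slope, np.exp(intercept)
        print(f"alpha = {alpha:.2f}, tau(150 MHz) = {tau_150:.1f} ms")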

  10. AN EMPIRICAL METHOD FOR IMPROVING THE QUALITY OF RXTE HEXTE SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Javier A.; Steiner, James F.; McClintock, Jeffrey E.

    2016-03-01

    We have developed a correction tool to improve the quality of Rossi X-ray Timing Explorer (RXTE) High Energy X-ray Timing Experiment (HEXTE) spectra by employing the same method we used earlier to improve the quality of RXTE Proportional Counter Array (PCA) spectra. We fit all of the hundreds of HEXTE spectra of the Crab individually to a simple power-law model, some 37 million counts in total for Cluster A and 39 million counts for Cluster B, and we create for each cluster a combined spectrum of residuals. We find that the residual spectrum of Cluster A is free of instrumental artifacts while that of Cluster B contains significant features with amplitudes ∼1%; the most prominent is in the energy range 30-50 keV, which coincides with the iodine K edge. Starting with the residual spectrum for Cluster B, via an iterative procedure we created the calibration tool hexBcorr for correcting any Cluster B spectrum of interest. We demonstrate the efficacy of the tool by applying it to Cluster B spectra of two bright black holes, which contain several million counts apiece. For these spectra, application of the tool significantly improves the goodness of fit, while affecting only slightly the broadband fit parameters. The tool may be important for the study of spectral features, such as cyclotron lines, a topic that is beyond the scope of this paper.

  11. Comparison of dynamic contrast-enhanced MRI parameters of breast lesions at 1.5 and 3.0 T: a pilot study

    PubMed Central

    Pineda, F D; Medved, M; Fan, X; Ivancevic, M K; Abe, H; Shimauchi, A; Newstead, G M

    2015-01-01

    Objective: To compare dynamic contrast-enhanced (DCE) MRI parameters from scans of breast lesions at 1.5 and 3.0 T. Methods: 11 patients underwent paired MRI examinations on both Philips 1.5 and 3.0 T systems (Best, Netherlands) using a standard clinical fat-suppressed, T1 weighted DCE-MRI protocol with 70–76 s temporal resolution. Signal intensity vs time curves were fit with an empirical mathematical model to obtain semi-quantitative measures of uptake and washout rates as well as time-to-peak enhancement (TTP). Maximum percent enhancement and signal enhancement ratio (SER) were also measured for each lesion. Percent differences between parameters measured at the two field strengths were compared. Results: TTP and SER parameters measured at 1.5 and 3.0 T were similar, with mean absolute differences of 19% and 22%, respectively. Maximum percent signal enhancement was significantly higher at 3.0 T than at 1.5 T (p = 0.006). Qualitative assessment showed that image quality was significantly higher at 3.0 T (p = 0.005). Conclusion: Our results suggest that TTP and SER are more robust to field strength change than the other measured kinetic parameters, and therefore measurements of these parameters can be more easily standardized than measurements of other parameters derived from DCE-MRI. Semi-quantitative measures of overall kinetic curve shape showed higher reproducibility than discrete classification of kinetic curve early and delayed phases in the majority of cases studied. Advances in knowledge: Qualitative measures of curve shape are not consistent across field strengths even when acquisition parameters are standardized. Quantitative measures of overall kinetic curve shape, by contrast, have higher reproducibility. PMID:25785918

  12. The heuristic value of redundancy models of aging.

    PubMed

    Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon

    2015-11-01

    Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal-level processes remains a challenge. We propose that complementary top-down, data-directed modelling of organismal-level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight into the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms.
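
    For reference, the Gompertz equation onto which the redundancy parameters are mapped is the hazard mu(t) = a*exp(b*t), with implied survivorship S(t) = exp(-(a/b)*(exp(b*t) - 1)). A minimal sketch (parameter values are hypothetical):

        import numpy as np

        def gompertz_mortality(t, a, b):
            # Gompertz hazard: a sets baseline mortality, b the rate of
            # actuarial senescence (mortality-rate doubling time ~ ln(2)/b).
            return a * np.exp(b * t)

        def gompertz_survival(t, a, b):
            # Survivorship implied by the Gompertz hazard.
            return np.exp(-(a / b) * (np.exp(b * t) - 1.0))

        t = np.linspace(0.0, 60.0, 7)
        print(gompertz_survival(t, a=5e-4, b=0.1))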

  13. Parametrization of free ion levels of four isoelectronic 4f² systems: Insights into configuration interaction parameters

    NASA Astrophysics Data System (ADS)

    Yeung, Yau Yuen; Tanner, Peter A.

    2013-12-01

    The experimental free ion 4f² energy level data sets comprising 12 or 13 J-multiplets of La⁺, Ce²⁺, Pr³⁺ and Nd⁴⁺ have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm⁻¹, respectively, for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4−i)², although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]⁻¹, where ΔE(bc) is the energy difference between the 4f² barycentre and that of the interacting configuration, namely 4f6p for La⁺, Ce²⁺, and Pr³⁺, and 5p⁵4f³ for Nd⁴⁺. The linear fit provides the rationale for the negative value of α for the case of La⁺, where the interacting configuration is located below 4f².

  14. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
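
    The core idea, treating the standard masses as fit parameters weighted by their own gravimetric uncertainty alongside the calibration parameters weighted by the system errors, can be sketched with a modern least-squares routine. This is not the VA02A-based program itself, and all numbers are hypothetical:

        import numpy as np
        from scipy.optimize import least_squares

        m_nom = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # nominal masses (mg)
        sigma_m = 0.002 * m_nom                             # 0.2% mass uncertainty
        response = np.array([10.2, 20.1, 40.9, 60.5, 81.6, 101.3])  # detector counts
        sigma_r = 0.3 * np.ones_like(response)              # system measurement error

        def residuals(p):
            # p = [a, b, m_1..m_6]: a linear calibration curve plus the "true"
            # masses, themselves fitted, each term consistently weighted.
            a, b, m = p[0], p[1], p[2:]
            r_res = (response - (a + b * m)) / sigma_r   # system errors
            m_res = (m - m_nom) / sigma_m                # mass errors of standards
            return np.concatenate([r_res, m_res])

        fit = least_squares(residuals, np.concatenate([[0.0, 100.0], m_nom]))
        print("intercept, slope:", fit.x[:2])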

  15. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
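
    The representation step, projecting noisy time-domain data onto a low-dimensional Legendre basis, can be sketched as follows. The truncation order and signal parameters are arbitrary, and this is not the authors' full estimator for amplitudes and time constants:

        import numpy as np
        from numpy.polynomial import legendre as L

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 1000)
        y = 2.0 * np.exp(-t / 0.2) + rng.normal(0.0, 0.1, t.size)  # noisy exponential

        x = 2.0 * t - 1.0                # map the time axis onto [-1, 1]
        coeffs = L.legfit(x, y, deg=12)  # low-dimensional Legendre representation

        # "Filtering": reconstruct the signal from the low-order coefficients only
        y_filtered = L.legval(x, coeffs)
        print("residual rms:", np.std(y - y_filtered))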

  16. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.

  17. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for the optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastrointestinal and genitourinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological basis for this model and its relationship to other models are detailed. A review of radiobiological experiments and published clinical data identified salient features and specific properties to which a biologically adequate model has to conform. The new model was fit to a set of actual clinical data. To verify the goodness of fit, two established NTCP models and a non-NTCP measure of complication risk were fitted to the same clinical data. The model parameters were fit by maximum likelihood estimation. Within the framework of the maximum likelihood approach, I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that yields a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for an accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) to 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated within a patient, although the correlation coefficient is rather small.

  18. Efficient computation of significance levels for multiple associations in large studies of correlated data, including genomewide association studies.

    PubMed

    Dudbridge, Frank; Koeleman, Bobby P C

    2004-09-01

    Large exploratory studies, including candidate-gene-association testing, genomewide linkage-disequilibrium scans, and array-expression experiments, are becoming increasingly common. A serious problem for such studies is that statistical power is compromised by the need to control the false-positive rate for a large family of tests. Because multiple true associations are anticipated, methods have been proposed that combine evidence from the most significant tests, as a more powerful alternative to individually adjusted tests. The practical application of these methods is currently limited by a reliance on permutation testing to account for the correlated nature of single-nucleotide polymorphism (SNP)-association data. On a genomewide scale, this is both very time-consuming and impractical for repeated explorations with standard marker panels. Here, we alleviate these problems by fitting analytic distributions to the empirical distribution of combined evidence. We fit extreme-value distributions for fixed lengths of combined evidence and a beta distribution for the most significant length. An initial phase of permutation sampling is required to fit these distributions, but it can be completed more quickly than a simple permutation test and need be done only once for each panel of tests, after which the fitted parameters give a reusable calibration of the panel. Our approach is also a more efficient alternative to a standard permutation test. We demonstrate the accuracy of our approach and compare its efficiency with that of permutation tests on genomewide SNP data released by the International HapMap Consortium. The estimation of analytic distributions for combined evidence will allow these powerful methods to be applied more widely in large exploratory studies.
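
    The general recipe, run a modest permutation phase once, fit an analytic extreme-value distribution, then reuse the fitted parameters as a calibration for the panel, might look like this (the distribution family and all numbers are illustrative, not the paper's exact choices):

        import numpy as np
        from scipy import stats

        # Hypothetical permutation phase: maximum combined-evidence statistic
        # recorded for each of 2000 permutations of the phenotype labels.
        rng = np.random.default_rng(2)
        perm_stats = rng.gumbel(loc=8.0, scale=1.5, size=2000)

        # Fit an extreme-value distribution once; reuse it as a calibration.
        loc, scale = stats.gumbel_r.fit(perm_stats)

        observed = 14.2   # hypothetical observed combined evidence
        print(f"analytic p = {stats.gumbel_r.sf(observed, loc, scale):.3e}")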

  19. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

    The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on timescales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters lifetime τ, recoil velocity Vk, and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H0⁻¹.

  20. From field data to volumes: constraining uncertainties in pyroclastic eruption parameters

    NASA Astrophysics Data System (ADS)

    Klawonn, Malin; Houghton, Bruce F.; Swanson, Donald A.; Fagents, Sarah A.; Wessel, Paul; Wolfe, Cecily J.

    2014-07-01

    In this study, we aim to understand the variability in eruption volume estimates derived from field studies of pyroclastic deposits. We distributed paper maps of the 1959 Kīlauea Iki tephra to 101 volcanologists worldwide, who produced hand-drawn isopachs. Across the returned maps, uncertainty in isopach areas is 7% across the well-sampled deposit but increases to over 30% for isopachs that are governed by the largest and smallest thickness measurements. We fit the exponential, power-law, and Weibull functions through the isopach thickness versus area^1/2 values and find volume estimate variations up to a factor of 4.9 for a single map. Across all maps and methodologies, we find an average standard deviation for the total volume of s = 29%. The volume uncertainties are largest for the most proximal (s = 62%) and distal fields (s = 53%) and small for the densely sampled intermediate deposit (s = 8%). For the Kīlauea Iki 1959 eruption, we find that the deposit beyond the 5-cm isopach contains only 2% of the total erupted volume, whereas the near-source deposit contains 48% and the intermediate deposit 50% of the total volume. Thus, the relative uncertainty within each zone impacts the total volume estimates differently. The observed uncertainties for the different deposit regions in this study illustrate a fundamental problem of estimating eruption volumes: while some methodologies may provide better fits to the isopach data or rely on fewer free parameters, the main issue remains the predictive capabilities of the empirical functions for the regions where measurements are missing.
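
    As one example of the methodologies compared here, exponential thinning T = T0*exp(-k*A^1/2) integrates over area to the closed form V = 2*T0/k**2. A minimal sketch with hypothetical isopach data:

        import numpy as np

        sqrt_area = np.array([1.0, 2.0, 3.5, 5.0, 7.0])      # sqrt(area), km
        thickness = np.array([2.0, 1.1, 0.45, 0.20, 0.06])   # thickness, m

        # Fit T = T0 * exp(-k * sqrt(A)) as a straight line in log space
        slope, intercept = np.polyfit(sqrt_area, np.log(thickness), 1)
        k, T0 = -slope, np.exp(intercept)

        # Integrating T over area gives V = 2*T0/k**2 for exponential thinning
        volume_km3 = 2.0 * (T0 / 1000.0) / k**2   # T0 converted from m to km
        print(f"T0 = {T0:.2f} m, k = {k:.2f} 1/km, V = {volume_km3:.4f} km^3")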

  1. FAMIAS - A user-friendly new software tool for the mode identification of photometric and spectroscopic time series

    NASA Astrophysics Data System (ADS)

    Zima, W.

    2008-12-01

    FAMIAS (Frequency Analysis and Mode Identification for AsteroSeismology) is a collection of state-of-the-art software tools for the analysis of photometric and spectroscopic time series data. It is one of the deliverables of the Work Package NA5: Asteroseismology of the European Coordination Action in Helio- and Asteroseismology (HELAS). Two main sets of tools are incorporated in FAMIAS. The first set allows the user to search for periodicities in the data using Fourier and non-linear least-squares fitting algorithms. The other set allows the user to carry out a mode identification for the detected pulsation frequencies to determine their pulsational quantum numbers, the harmonic degree, ℓ, and the azimuthal order, m. For the spectroscopic mode identification, the Fourier parameter fit method and the moment method are available. The photometric mode identification is based on pre-computed grids of atmospheric parameters and non-adiabatic observables, and uses the method of amplitude ratios and phase differences in different filters. The types of stars to which FAMIAS is applicable are main-sequence pulsators hotter than the Sun. This includes the Gamma Dor stars, Delta Sct stars, the slowly pulsating B stars and the Beta Cep stars - basically all pulsating main-sequence stars for which empirical mode identification is required to successfully carry out asteroseismology. The complete manual for FAMIAS is published in a special issue of Communications in Asteroseismology, Vol. 155. The homepage of FAMIAS provides the possibility to download the software and to read the on-line documentation.

  2. Simultaneous fits in ISIS on the example of GRO J1008-57

    NASA Astrophysics Data System (ADS)

    Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern

    2015-04-01

    Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and data types: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits, based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple datasets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
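
    The essence of a simultaneous fit, tying physically shared parameters across datasets while leaving dataset-specific ones free, can be illustrated outside ISIS with a generic least-squares stack. The model form and all numbers below are hypothetical:

        import numpy as np
        from scipy.optimize import least_squares

        # Two synthetic spectra that share a photon index and folding energy,
        # but have different normalizations (e.g., two flux states).
        E = np.linspace(3.0, 50.0, 60)
        rng = np.random.default_rng(3)
        spec1 = 10.0 * E**-1.5 * np.exp(-E / 15.0) * (1 + rng.normal(0, 0.02, E.size))
        spec2 = 4.0 * E**-1.5 * np.exp(-E / 15.0) * (1 + rng.normal(0, 0.02, E.size))

        def model(E, norm, gamma, efold):
            return norm * E**-gamma * np.exp(-E / efold)

        def residuals(p):
            norm1, norm2, gamma, efold = p
            # gamma and efold are tied across both datasets; only norms are free
            return np.concatenate([spec1 - model(E, norm1, gamma, efold),
                                   spec2 - model(E, norm2, gamma, efold)])

        fit = least_squares(residuals, x0=[5.0, 5.0, 1.0, 10.0])
        print("shared folding energy:", fit.x[3])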

  3. Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow

    NASA Astrophysics Data System (ADS)

    Narsipur, Shreyas

    Unsteady aerodynamics has been a topic of research since the late 1930s and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and flow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of the unsteady flow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970s, the push to develop low-order models to solve unsteady flow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, several empirical constants and curve fits derived from existing experimental and computational results are required by the semi-empirical models to be an effective analysis tool. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall type flow problems, with a focus on accurately modeling the unsteady flow physics so as to reduce empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model, to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at adding LEV-shedding capabilities to the LVE algorithm and combining them with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to the theory focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time-lag parameter, tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and flow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Including LEV-shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the flow morphology of the LEV, to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers, and to relate it to the critical leading-edge suction parameter (LESP) that controls the initiation and termination of LEV shedding in the low-order model. The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compared excellently with CFD and experimental solutions, both in terms of aerodynamic loads and vortex flow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research successfully models the physics of unsteady flow, restricting the number of empirical coefficients to just two variables while capturing the aerodynamic forces and flow patterns in a simple and precise manner.

  4. Modelling the Factors that Affect Individuals' Utilisation of Online Learning Systems: An Empirical Study Combining the Task Technology Fit Model with the Theory of Planned Behaviour

    ERIC Educational Resources Information Center

    Yu, Tai-Kuei; Yu, Tai-Yi

    2010-01-01

    Understanding learners' behaviour, perceptions and influence in terms of learner performance is crucial to predict the use of electronic learning systems. By integrating the task-technology fit (TTF) model and the theory of planned behaviour (TPB), this paper investigates the online learning utilisation of Taiwanese students. This paper provides a…

  5. An empirical approach to the stopping power of solids and gases for ions from 3Li to 18Ar - Part II

    NASA Astrophysics Data System (ADS)

    Paul, Helmut; Schinner, Andreas

    2002-10-01

    This paper is a continuation of the work presented in Nucl. Instr. and Meth. Phys. Res. B 179 (2001) 299. Its aim is to produce a table of stopping powers by fitting empirical stopping values. Our database has been enlarged and we use a better fit function. As before, we treat solid and gaseous targets separately, but we now also obtain results for H2 and He targets. Using an improved version of our program MSTAR, we can calculate the stopping power for any ion (3 ⩽ Z1 ⩽ 18) at specific energies from 0.001 to 1000 MeV/nucleon and for any element, mixture or compound contained in ICRU Report 49. MSTAR is available on the internet; it can be used standalone or built into other programs as a subroutine. Using a statistical program to compare our fits with the experimental data, we find that MSTAR represents the data within 2% at high energy and within up to 20% (25% for gases) at the lowest energies. Fitting errors are 40-110% larger than the experimental errors given by the authors. For some gas targets, MSTAR describes the data better than Ziegler's program TRIM.

  6. FAST: Fitting and Assessment of Synthetic Templates

    NASA Astrophysics Data System (ADS)

    Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis

    2018-03-01

    FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, the photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points, or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs in that it fits fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and to easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.

  7. Assessing posttraumatic stress disorder's latent structure in elderly bereaved European trauma survivors: evidence for a five-factor dysphoric and anxious arousal model.

    PubMed

    Armour, Cherie; O'Connor, Maja; Elklit, Ask; Elhai, Jon D

    2013-10-01

    The three-factor structure of posttraumatic stress disorder (PTSD) specified by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, is not supported in the empirical literature. Two alternative four-factor models have received a wealth of empirical support. However, a consensus regarding which is superior has not been reached. A recent five-factor model has been shown to provide superior fit over the existing four-factor models. The present study investigated the fit of the five-factor model against the existing four-factor models and assessed the resultant factors' association with depression in a bereaved European trauma sample (N = 325). The participants were assessed for PTSD via the Harvard Trauma Questionnaire and depression via the Beck Depression Inventory. The five-factor model provided superior fit to the data compared with the existing four-factor models. In the dysphoric arousal model, depression was equally related to both dysphoric arousal and emotional numbing, whereas depression was more related to dysphoric arousal than to anxious arousal.

  8. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness of fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals with the empirical data. The emission inventory 95% uncertainty ranges run from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. The uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
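
    For the censored fits, the MLE treats detected values through the density and nondetects through the CDF evaluated at their detection limits. A minimal sketch for a lognormal variability distribution (all values hypothetical):

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        detected = np.array([0.8, 1.2, 2.5, 3.1, 4.0, 6.3])  # measured values
        det_limits = np.array([0.5, 0.5, 1.0])               # limits of nondetects

        def neg_loglik(p):
            mu, sigma = p
            if sigma <= 0.0:
                return np.inf
            # Detected values contribute log-densities; nondetects contribute
            # log-probabilities of falling below their detection limits.
            ll = stats.lognorm.logpdf(detected, s=sigma, scale=np.exp(mu)).sum()
            ll += stats.lognorm.logcdf(det_limits, s=sigma, scale=np.exp(mu)).sum()
            return -ll

        fit = minimize(neg_loglik, x0=[0.5, 1.0], method="Nelder-Mead")
        print("mu, sigma:", fit.x)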

  9. Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models

    NASA Astrophysics Data System (ADS)

    Van Houtte, Chris; Denolle, Marine

    2018-04-01

    Stress drops calculated from source spectral studies currently show larger variability than is implied by empirical ground motion models. One potential origin of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case-study earthquakes are examined, the 2016 Mw 7.1 Kumamoto, Japan, earthquake and an Mw 5.3 aftershock of the 2016 Mw 7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
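
    The underlying source spectrum is the generalized Brune-type form S(f) = Omega0 / (1 + (f/fc)**n), with n = 2 recovering the ω-square model; forcing n = 2 when the true falloff differs is one way fc estimates become biased. A single-spectrum sketch (not the hierarchical Bayesian machinery the study advocates; all numbers synthetic):

        import numpy as np
        from scipy.optimize import curve_fit

        def source_spectrum(f, omega0, fc, n):
            # Generalized Brune spectrum; n = 2 is the omega-square model.
            return omega0 / (1.0 + (f / fc)**n)

        f = np.logspace(-1, 1.3, 40)
        rng = np.random.default_rng(4)
        obs = source_spectrum(f, 1.0, 0.8, 2.2) * np.exp(rng.normal(0, 0.1, f.size))

        # Fitting n jointly with fc avoids the bias of fixing n = 2 a priori
        popt, _ = curve_fit(source_spectrum, f, obs, p0=[1.0, 1.0, 2.0])
        print("omega0, fc, n:", popt)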

  10. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is, the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.

  11. Scenario analysis of freight vehicle accident risks in Taiwan.

    PubMed

    Tsai, Ming-Chih; Su, Chien-Chih

    2004-07-01

    This study develops a quantitative risk model utilizing the Generalized Linear Interactive Model (GLIM) to analyze major freight vehicle accidents in Taiwan. Eight scenarios are established by interacting three categorical variables (driver age, vehicle type, and road type), each of which contains two levels. A database consisting of 2043 major accidents occurring between 1994 and 1998 in Taiwan is utilized to fit and calibrate the model parameters. The empirical results indicate that accident rates of freight vehicles in Taiwan were high in the scenarios involving trucks and non-freeway systems, while accident consequences were severe in the scenarios involving mature drivers or non-freeway systems. Empirical evidence also shows that there is no significant relationship between accident rates and accident consequences. This stresses that safety studies describing risk merely as accident rates, rather than as the combination of accident rates and consequences, might by definition lead to biased risk perceptions. Finally, the study recommends using the number of vehicles as an alternative measure of traffic exposure in commercial vehicle risk analysis. The merit of this is that it is simple and thus reliable; moreover, the resulting risk, expressed as fatalities per vehicle, could provide clear and direct policy implications for insurance practices and safety regulations.

  12. Nature of the Congested Traffic and Quasi-steady States of the General Motor Models

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Xu, Xihua; Pang, John Z. F.; Monterola, Christopher

    2015-03-01

    We examine the General Motors (GM) class of microscopic traffic models and analyze some of the universal features of the (multi-)cluster solutions, including the emergence of an intrinsic scale and the quasisoliton dynamics. We show that the GM models can capture the essential physics of real traffic dynamics, especially the phase transition from the free flow to the congested phase, from which the wide moving jams emerge (the F-S-J transition pioneered by B. S. Kerner). In particular, the congested phase can be associated either with the multi-cluster quasi-steady states or with their more homogeneous precursor states. In both cases the states can last for a long time, and the narrow clusters will eventually grow and merge, leading to the formation of the wide moving jams. We present a general method to fit the empirical parameters so that both quantitative and qualitative macroscopic empirical features can be reproduced with a minimal GM model. We present numerical results for the traffic dynamics both with and without a bottleneck, including various types of spontaneous and induced "synchronized flow," as well as the evolution of wide moving jams. We also discuss the implications for the nature of different phases in traffic dynamics.

  13. Anisotropy of the Fermi surface, Fermi velocity, many-body enhancement, and superconducting energy gap in Nb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crabtree, G.W.; Dye, D.H.; Karim, D.P.

    1987-02-01

    The detailed angular dependence of the Fermi radius kF, the Fermi velocity vF(k), the many-body enhancement factor λ(k), and the superconducting energy gap Δ(k), for electrons on the Fermi surface of Nb are derived with use of the de Haas-van Alphen (dHvA) data of Karim, Ketterson, and Crabtree [J. Low Temp. Phys. 30, 389 (1978)], a Korringa-Kohn-Rostoker parametrization scheme, and an empirically adjusted band-structure calculation of Koelling. The parametrization is a nonrelativistic five-parameter fit allowing for cubic rather than spherical symmetry inside the muffin-tin spheres. The parametrized Fermi surface gives a detailed interpretation of the previously unexplained κ, α′, and α″ orbits in the dHvA data. Comparison of the parametrized Fermi velocities with those of the empirically adjusted band calculation allows the anisotropic many-body enhancement factor λ(k) to be determined. Theoretical calculations of the electron-phonon interaction based on the tight-binding model agree with our derived values of λ(k) much better than those based on the rigid-muffin-tin approximation. The anisotropy in the superconducting energy gap Δ(k) is estimated from our results for λ(k), assuming weak anisotropy.

  14. Anisotropy of the Fermi surface, Fermi velocity, many-body enhancement, and superconducting energy gap in Nb

    NASA Astrophysics Data System (ADS)

    Crabtree, G. W.; Dye, D. H.; Karim, D. P.; Campbell, S. A.; Ketterson, J. B.

    1987-02-01

    The detailed angular dependence of the Fermi radius kF, the Fermi velocity vF(k), the many-body enhancement factor λ(k), and the superconducting energy gap Δ(k), for electrons on the Fermi surface of Nb are derived with use of the de Haas-van Alphen (dHvA) data of Karim, Ketterson, and Crabtree [J. Low Temp. Phys. 30, 389 (1978)], a Korringa-Kohn-Rostoker parametrization scheme, and an empirically adjusted band-structure calculation of Koelling. The parametrization is a nonrelativistic five-parameter fit allowing for cubic rather than spherical symmetry inside the muffin-tin spheres. The parametrized Fermi surface gives a detailed interpretation of the previously unexplained κ, α', and α'' orbits in the dHvA data. Comparison of the parametrized Fermi velocities with those of the empirically adjusted band calculation allow the anisotropic many-body enhancement factor λ(k) to be determined. Theoretical calculations of the electron-phonon interaction based on the tight-binding model agree with our derived values of λ(k) much better than those based on the rigid-muffin-tin approximation. The anisotropy in the superconducting energy gap Δ(k) is estimated from our results for λ(k), assuming weak anisotropy.

  15. Estimation of hectare-scale soil-moisture characteristics from aquifer-test data

    USGS Publications Warehouse

    Moench, A.F.

    2003-01-01

    Analysis of a 72-h, constant-rate aquifer test conducted in a coarse-grained and highly permeable glacial outwash deposit on Cape Cod, Massachusetts, revealed that drawdowns measured in 20 piezometers located at various depths below the water table and distances from the pumped well were significantly influenced by drainage from the vadose zone. The influence was greatest in piezometers located close to the water table and diminished with increasing depth. The influence of the vadose zone was evident from a gap, in the intermediate-time zone, between measured drawdowns and drawdowns computed under the assumption that drainage from the vadose zone occurs instantaneously in response to a decline in the elevation of the water table. By means of an analytical model designed to account for time-varying drainage, simulated drawdowns could be closely fitted to measured drawdowns regardless of the piezometer locations. Because of the exceptional quality and quantity of the data and the relatively small aquifer heterogeneity, it was possible by inverse modeling to estimate all relevant aquifer parameters and a set of three empirical constants used in the upper-boundary condition to account for the dynamic drainage process. The empirical constants were used to define a one-dimensional (1D) drainage-versus-time curve that is assumed to be representative of the bulk material overlying the water table. The curve was inverted with a parameter estimation algorithm and a 1D numerical model for variably saturated flow to obtain soil-moisture retention curves and unsaturated hydraulic conductivity relationships defined by the Brooks and Corey equations. Direct analysis of the aquifer-test data using a parameter estimation algorithm and a two-dimensional, axisymmetric numerical model for variably saturated flow yielded similar soil-moisture characteristics. Results suggest that hectare-scale soil-moisture characteristics differ from core-scale predictions, and that even relatively small amounts of fine-grained material and heterogeneity can dominate the large-scale soil-moisture characteristics and aquifer response.
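
    For reference, the Brooks and Corey relationships express effective saturation and (Burdine-form) relative conductivity as Se = (hb/h)**lam for h > hb and K = Ks*Se**(3 + 2/lam). A minimal sketch with hypothetical parameters:

        import numpy as np

        def brooks_corey_saturation(h, hb, lam):
            # Effective saturation for suction head h; hb is the air-entry
            # head and lam the pore-size distribution index.
            h = np.asarray(h, dtype=float)
            return np.where(h > hb, (hb / h)**lam, 1.0)

        def brooks_corey_conductivity(Se, Ks, lam):
            # Burdine-based relative conductivity: K = Ks * Se**(3 + 2/lam)
            return Ks * Se**(3.0 + 2.0 / lam)

        h = np.array([0.1, 0.3, 0.5, 1.0, 2.0])               # suction head (m)
        Se = brooks_corey_saturation(h, hb=0.25, lam=2.0)
        K = brooks_corey_conductivity(Se, Ks=1e-4, lam=2.0)   # Ks in m/s
        print(Se, K)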

  16. Numerical development of a new correlation between biaxial fracture strain and material fracture toughness for small punch test

    NASA Astrophysics Data System (ADS)

    Kumar, Pradeep; Dutta, B. K.; Chattopadhyay, J.

    2017-04-01

    Miniaturized specimens are used to determine mechanical properties of materials, such as yield stress, ultimate stress, and fracture toughness. Use of such specimens is essential whenever only a limited quantity of material is available for testing, as with aged or irradiated materials. The miniaturized small punch test (SPT) is a technique widely used to determine changes in the mechanical properties of materials. Various empirical correlations have been proposed in the literature to determine the value of fracture toughness (JIC) using this technique: biaxial fracture strain is determined from SPT tests, and this parameter is then used to determine JIC using available empirical correlations. The correlations between JIC and biaxial fracture strain quoted in the literature are based on experimental data acquired for a large number of materials. A number of such correlations are available in the literature, and they are generally not in agreement with each other. In the present work, an attempt has been made to determine the correlation between biaxial fracture strain (εqf) and crack initiation toughness (Ji) numerically. About one hundred materials were digitally generated by varying yield stress, ultimate stress, hardening coefficient and Gurson parameters. Each generated material was then used to analyze an SPT specimen and a standard three-point bend (TPB) specimen. Analysis of the SPT specimen yielded the biaxial fracture strain (εqf), and analysis of the TPB specimen yielded the value of Ji. A graph was then plotted between these two parameters for all the digitally generated materials; the best-fit straight line determines the correlation. It has also been observed that it is possible to have variation in Ji for the same value of biaxial fracture strain (εqf) within a limit; such variation in the value of Ji has also been ascertained using the graph. Experimental SPT data acquired earlier for three materials were then used to obtain Ji using the newly developed correlation. A reasonable comparison of the calculated Ji with the values quoted in the literature confirmed the usefulness of the correlation.

  17. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means of predicting strong ground motion using the Mw = 7.4 and Mw = 7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point-source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and the three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small- to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher-frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.

  18. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to roughly the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
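
    An AR(1) process x_t = rho*x_{t-1} + eps_t has a correlation length of about -1/ln(rho) steps, which is the timescale below which it can mimic a self-affine walk. A minimal simulation sketch:

        import numpy as np

        def ar1_series(rho, n, sigma=1.0, seed=0):
            # AR(1): x[t] = rho * x[t-1] + eps[t], with eps ~ N(0, sigma^2)
            rng = np.random.default_rng(seed)
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + rng.normal(0.0, sigma)
            return x

        x = ar1_series(rho=0.95, n=10000)
        # The empirical lag-1 autocorrelation should recover rho
        print(np.corrcoef(x[:-1], x[1:])[0, 1])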

  19. Semi-empirical device model for Cu₂ZnSn(S,Se)₄ solar cells

    NASA Astrophysics Data System (ADS)

    Gokmen, Tayfun; Gunawan, Oki; Mitzi, David B.

    2014-07-01

    We present a device model for the hydrazine-processed kesterite Cu₂ZnSn(S,Se)₄ (CZTSSe) solar cell with a world record efficiency of ~12.6%. Detailed comparison of the simulation results, performed using the wxAMPS software, with the measured device parameters shows that our model captures the vast majority of experimental observations, including VOC, JSC, FF, and efficiency under normal operating conditions, as well as temperature vs. VOC, sun intensity vs. VOC, and quantum efficiency. Moreover, our model is consistent with material properties derived from various techniques. Interestingly, this model does not include any interface defects/states, suggesting that all the experimentally observed features can be accounted for by the bulk properties of CZTSSe. An electrical (mobility) gap that is smaller than the optical gap is critical to fit the VOC data. These findings point to the importance of tail states in CZTSSe solar cells.

  20. Empirical evaluation of pump inlet compliance

    NASA Technical Reports Server (NTRS)

    Ghahremani, F. G.; Rubin, S.

    1972-01-01

    Cavitation compliance was determined experimentally from pulsing tests on a number of rocket turbopumps. The primary test data used for this study are those for the Rocketdyne H-1, F-1, and J-2 oxidizer and fuel pumps employed on Saturn vehicles. The study shows that these data can be correlated by a particular form of nondimensionalization, the key feature of which is to divide the operating cavitation number or suction specific speed by its value at head breakdown. An expression is obtained for a best-fit curve for these data. Another set of test data for the Aerojet LR87 and 91 pumps can be correlated by a somewhat different nondimensional pump performance parameter, specifically by relating the cavitation number to its position between the head breakdown point and the point of zero slope of the head coefficient versus cavitation number. Recommendations are given for the estimation of the cavitation compliance for new designs in the Rocketdyne family of pumps.

  1. Information driving force and its application in agent-based modeling

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2018-04-01

    Exploring the scientific impact of online big data has attracted much attention from researchers in different fields in recent years. Complex financial systems are typical open systems, profoundly influenced by external information. Based on large-scale data from the public media and stock markets, we first define an information driving force and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric between the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Notably, all the key parameters are determined from the empirical analysis rather than from statistical fitting of the simulation results. With our model, both the stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.

  2. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study, including behavioral science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for the existence of maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model when discrete survival times are treated as continuous, and of the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374

  3. Oscillator strengths and branching fractions of 4d75p-4d75s Rh II transitions

    NASA Astrophysics Data System (ADS)

    Bouazza, Safa

    2017-01-01

    This work reports a semi-empirical determination of oscillator strengths, transition probabilities, and branching fractions for Rh II 4d75p-4d75s transitions over a wide wavelength range. The angular coefficients of the transition matrix, obtained beforehand in pure SL coupling with the help of Racah algebra, are transformed into intermediate coupling using eigenvector amplitudes of these two configuration levels determined for this purpose. The transition integral was treated as a free parameter in a least-squares fit to experimental oscillator strength (gf) values found in the literature. The extracted value ⟨4d75s|r1|4d75p⟩ = 2.7426 ± 0.0007 is slightly smaller than that computed by ab initio methods. Following the oscillator strength evaluations, transition probabilities and branching fractions were deduced and compared to those obtained experimentally or through other approaches, such as a pseudo-relativistic Hartree-Fock model including core-polarization effects.
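
    Schematically, once the angular factors are fixed by the coupling calculation, each gf value is proportional to the square of the radial transition integral, so the one-parameter least-squares fit has a closed form. The angular factors and gf values below are synthetic placeholders, not the Rh II data.

    ```python
    import numpy as np

    angular = np.array([0.12, 0.34, 0.08, 0.51, 0.27])  # assumed known factors
    gf_exp = np.array([0.90, 2.58, 0.61, 3.81, 2.05])   # synthetic "experimental" gf

    # model: gf_i = angular_i * R**2  ->  closed-form least-squares for R**2
    r_squared = np.dot(angular, gf_exp) / np.dot(angular, angular)
    print("fitted radial transition integral R =", np.sqrt(r_squared))
    ```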

  4. Cluster Analysis and Gaussian Mixture Estimation of Correlated Time-Series by Means of Multi-dimensional Scaling

    NASA Astrophysics Data System (ADS)

    Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

    We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). Applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set to several Gaussian densities using a mixture of Gaussians. By minimizing the Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. One might naturally assume that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. The justification of this assumption is checked numerically for empirical Japanese stock data, for instance around 11 March 2011.
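
    A compact sketch of this pipeline, with synthetic returns standing in for the Yahoo!Japan data: build a correlation distance, embed with MDS, then select the number of mixture components by AIC.

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    returns = rng.standard_normal((250, 40))  # 250 days x 40 synthetic "stocks"
    corr = np.corrcoef(returns.T)             # 40 x 40 cross-correlation matrix
    dist = np.sqrt(np.maximum(2.0 * (1.0 - corr), 0.0))  # correlation distance

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)

    # choose the mixture size that minimizes AIC
    best = min((GaussianMixture(n_components=k, random_state=0).fit(coords)
                for k in range(1, 6)),
               key=lambda m: m.aic(coords))
    print("number of Gaussian components chosen by AIC:", best.n_components)
    ```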

  5. Simplified hydraulic model of French vertical-flow constructed wetlands.

    PubMed

    Arias, Luis; Bertrand-Krajewski, Jean-Luc; Molle, Pascal

    2014-01-01

    Designing vertical-flow constructed wetlands (VFCWs) to treat both rain events and dry weather flow is a complex task due to the stochastic nature of rain events. Dynamic models can help to improve design, but they usually prove difficult for designers to handle. This study focuses on the development of a simplified hydraulic model of French VFCWs using an empirical infiltration coefficient, the infiltration capacity parameter (ICP). The model was fitted using data collected at 60-second steps on two experimental French VFCW systems and compared with the Hydrus 1D software. The model revealed a season-by-season evolution of the ICP that could be explained by the mechanical role of reeds. This simplified model makes it possible to define time-course shifts in ponding time and outlet flows. As ponding time hinders oxygen renewal, thus impacting nitrification and organic matter degradation, ponding time limits can be used to fix a reliable design for treating both dry weather flow and rain events.
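
    One plausible minimal reading of an ICP-style model, sketched below: infiltration through the filter is capped by a capacity parameter, and inflow in excess of that capacity ponds on the surface. The exact formulation, units, and values are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    def simulate(inflow, icp, dt=60.0):
        """inflow: m3/s series at dt-second steps; icp: max infiltration rate (m3/s)."""
        ponded, out, pond = 0.0, [], []
        for q in inflow:
            infil = min(q + ponded / dt, icp)              # infiltrate up to capacity
            ponded = max(ponded + (q - infil) * dt, 0.0)   # excess ponds on surface
            out.append(infil)
            pond.append(ponded)
        return np.array(out), np.array(pond)

    rain = np.concatenate([np.full(30, 0.02), np.zeros(60)])  # a synthetic event
    outflow, ponding = simulate(rain, icp=0.012)
    print("max ponded volume:", ponding.max())
    ```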

  6. The Cusp Catastrophe Model as Cross-Sectional and Longitudinal Mixture Structural Equation Models

    PubMed Central

    Chow, Sy-Miin; Witkiewitz, Katie; Grasman, Raoul P. P. P.; Maisto, Stephen A.

    2015-01-01

    Catastrophe theory (Thom, 1972, 1993) is the study of the many ways in which continuous changes in a system’s parameters can result in discontinuous changes in one or several outcome variables of interest. Catastrophe theory–inspired models have been used to represent a variety of change phenomena in the realm of social and behavioral sciences. Despite their promise, widespread applications of catastrophe models have been impeded, in part, by difficulties in performing model fitting and model comparison procedures. We propose a new modeling framework for testing one kind of catastrophe model — the cusp catastrophe model — as a mixture structural equation model (MSEM) when cross-sectional data are available; or alternatively, as an MSEM with regime-switching (MSEM-RS) when longitudinal panel data are available. The proposed models and the advantages offered by this alternative modeling framework are illustrated using two empirical examples and a simulation study. PMID:25822209
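
    For reference, the standard form of the cusp catastrophe (independent of the MSEM framing): a quartic potential whose number of equilibria changes across the bifurcation set.

    ```latex
    V(y;\alpha,\beta) = \tfrac{1}{4}y^{4} - \tfrac{1}{2}\beta y^{2} - \alpha y,
    \qquad
    \frac{\partial V}{\partial y} = y^{3} - \beta y - \alpha = 0,
    \qquad
    \text{bifurcation set: } 27\alpha^{2} = 4\beta^{3}.
    ```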

  7. An ecological valence theory of human color preference.

    PubMed

    Palmer, Stephen E; Schloss, Karen B

    2010-05-11

    Color preference is an important aspect of visual experience, but little is known about why people in general like some colors more than others. Previous research suggested explanations based on biological adaptations [Hurlbert AC, Ling YL (2007) Curr Biol 17:623-625] and color-emotions [Ou L-C, Luo MR, Woodcock A, Wright A (2004) Color Res Appl 29:381-389]. In this article we articulate an ecological valence theory in which color preferences arise from people's average affective responses to color-associated objects. An empirical test provides strong support for this theory: People like colors strongly associated with objects they like (e.g., blues with clear skies and clean water) and dislike colors strongly associated with objects they dislike (e.g., browns with feces and rotten food). Relative to alternative theories, the ecological valence theory both fits the data better (even with fewer free parameters) and provides a more plausible, comprehensive causal explanation of color preferences.

  8. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared with FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies of the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows quantitative comparison between different models and geometries and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool for use on larger sets of observed data.
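
    A toy genetic-algorithm optimization in the spirit of FitSKIRT (which couples GAlib to SKIRT); here an inexpensive stand-in objective with added noise mimics the stochastic Monte Carlo output. Population size, rates, and the objective are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    true_params = np.array([1.5, -0.7, 0.3])

    def noisy_objective(p):
        # squared error of a stand-in model, plus Monte-Carlo-like noise
        return np.sum((p - true_params) ** 2) + 0.01 * rng.standard_normal()

    pop = rng.uniform(-3, 3, size=(60, 3))
    for generation in range(80):
        scores = np.array([noisy_objective(p) for p in pop])
        parents = pop[np.argsort(scores)[:20]]                 # selection
        ia, ib = rng.integers(0, 20, (2, 60))
        mask = rng.random((60, 3)) < 0.5
        children = np.where(mask, parents[ia], parents[ib])    # uniform crossover
        children += 0.1 * rng.standard_normal((60, 3))         # mutation
        children[:3] = parents[:3]                             # elitism
        pop = children

    scores = np.array([noisy_objective(p) for p in pop])
    print("best individual:", pop[np.argmin(scores)])
    ```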

  9. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work, an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is determined using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
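
    The response-surface step can be sketched as a second-order polynomial fit in the machining parameters followed by constrained minimization; the design points, ranges, and response below are synthetic, not the paper's measurements.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    # synthetic design: cutting speed, feed, depth of cut (illustrative ranges)
    X = rng.uniform([90, 0.05, 0.2], [220, 0.20, 1.0], size=(27, 3))
    power = 40 + 0.8*X[:, 0] + 900*X[:, 1] + 120*X[:, 2] + rng.normal(0, 5, 27)

    # second-order response surface
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), power)

    pred = lambda x: model.predict(poly.transform(x.reshape(1, -1)))[0]
    res = minimize(pred, x0=[150, 0.10, 0.5],
                   bounds=[(90, 220), (0.05, 0.20), (0.2, 1.0)])
    print("minimum-power settings (speed, feed, depth):", res.x)
    ```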

  10. Development of reliable pavement models.

    DOT National Transportation Integrated Search

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...

  11. An Empirical Study of Synchrophasor Communication Delay in a Utility TCP/IP Network

    NASA Astrophysics Data System (ADS)

    Zhu, Kun; Chenine, Moustafa; Nordström, Lars; Holmström, Sture; Ericsson, Göran

    2013-07-01

    Although there is a plethora of literature dealing with Phasor Measurement Unit (PMU) communication delay, no effort has been made to generalize empirical delay results by identifying the distribution with the best fit. Existing studies typically assume a distribution or simply build on analogies to communication network routing delay. Specifically, this study provides insight into the characterization of the communication delay of both unprocessed PMU data and synchrophasors sorted by a Phasor Data Concentrator (PDC). The results suggest that a bi-modal distribution containing two normal distributions offers the best fit to the delay of the unprocessed data, whereas the delay profile of the sorted synchrophasors resembles a normal distribution. Based on these results, the possibility of evaluating the reliability of a synchrophasor application with respect to a particular choice of PDC timeout is discussed.
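
    The distribution comparison can be reproduced schematically as below: fit a single normal and a two-component Gaussian mixture to delay samples, compare by AIC, and read off the loss fraction for a candidate PDC timeout. The delay values are synthetic stand-ins for the utility measurements.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    delays = np.concatenate([rng.normal(12, 1.0, 700),   # fast path (ms)
                             rng.normal(18, 1.5, 300)])  # slower path (ms)

    # single normal: AIC = 2k - 2 log L with k = 2 fitted parameters
    mu, sigma = stats.norm.fit(delays)
    aic_norm = 2 * 2 - 2 * np.sum(stats.norm.logpdf(delays, mu, sigma))

    gmm = GaussianMixture(n_components=2, random_state=0).fit(delays.reshape(-1, 1))
    print("normal AIC:", aic_norm, " bimodal AIC:", gmm.aic(delays.reshape(-1, 1)))

    timeout = 20.0  # ms; fraction of frames a PDC with this timeout would drop
    print("P(delay > timeout):", (delays > timeout).mean())
    ```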

  12. The structure of post-traumatic stress disorder symptoms in three female trauma samples: A comparison of interview and self-report measures

    PubMed Central

    Scher, Christine D.; McCreary, Donald R.; Asmundson, Gordon J.G.; Resick, Patricia A.

    2009-01-01

    Empirical research increasingly suggests that post-traumatic stress disorder (PTSD) comprises four factors: re-experiencing, avoidance, numbing, and hyperarousal. Nonetheless, there remains some inconsistency in the findings of the factor analyses that form the bulk of this empirical literature. One source of such inconsistency may be assessment measure idiosyncrasies. To examine this issue, we conducted confirmatory factor analyses of interview and self-report data across three trauma samples. Analyses of the interview data indicated a good fit for a four-factor model across all samples; analyses of the self-report data indicated an adequate fit in two of three samples. Overall, the findings suggest that measure idiosyncrasies may account for some of the inconsistency in previous factor analyses of PTSD symptoms. PMID:18206346

  13. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation, based on experimental results and obtained by curve fitting, for estimating the thermal conductivity enhancement of MgO-water nanofluid. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimal network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. The comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
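
    A sketch of the comparison on synthetic measurements: a small network with a 7-neuron hidden layer (as in the abstract) against a quadratic curve fit, both scored on the same data. The data-generating formula is an assumption for demonstration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    T = rng.uniform(20, 60, 80)        # temperature, deg C
    phi = rng.uniform(0.0, 0.02, 80)   # MgO volume fraction
    k_enh = 1 + 8*phi + 0.004*(T - 20)*phi/0.02 + rng.normal(0, 0.005, 80)

    X = np.column_stack([T, phi])
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000,
                                     random_state=0)).fit(X, k_enh)
    poly = PolynomialFeatures(2).fit(X)
    curve = LinearRegression().fit(poly.transform(X), k_enh)

    print("ANN R^2:  ", ann.score(X, k_enh))
    print("curve R^2:", curve.score(poly.transform(X), k_enh))
    ```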

  14. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
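
    The contrast between the three formulations can be illustrated in closed form for the simplest case, a single normal-mean parameter per dataset: "global" pools everything, "separate" fits each dataset alone, and "hierarchical" shrinks the separate estimates toward the grand mean. This is a deliberate simplification of the paper's Bayesian machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_rates = rng.normal(1.0, 0.2, 5)                  # dataset-level truth
    data = [rng.normal(r, 0.3, 20) for r in true_rates]   # 5 synthetic datasets

    separate = np.array([d.mean() for d in data])         # independent variation
    glob = np.concatenate(data).mean()                    # no variation

    # moment-based partial pooling: shrink dataset means toward the grand mean
    within_var = np.mean([d.var(ddof=1) / len(d) for d in data])
    between_var = max(separate.var(ddof=1) - within_var, 1e-12)
    weight = between_var / (between_var + within_var)
    hierarchical = glob + weight * (separate - glob)

    print("separate:    ", np.round(separate, 3))
    print("global:      ", round(glob, 3))
    print("hierarchical:", np.round(hierarchical, 3))
    ```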

  15. Analysis of quality control data of eight modern radiotherapy linear accelerators: the short- and long-term behaviours of the outputs and the reproducibility of quality control measurements

    NASA Astrophysics Data System (ADS)

    Kapanen, Mika; Tenhunen, Mikko; Hämäläinen, Tuomo; Sipilä, Petri; Parkkinen, Ritva; Järvinen, Hannu

    2006-07-01

    Quality control (QC) data of radiotherapy linear accelerators, collected by Helsinki University Central Hospital between the years 2000 and 2004, were analysed. The goal was to provide information for the evaluation and elaboration of QC of accelerator outputs and to propose a method for QC data analysis. Short- and long-term drifts in outputs were quantified by fitting empirical mathematical models to the QC measurements. Normally, long-term drifts were modelled well (<=1%) by either a straight line or a single-exponential function. A drift of 2% occurred in 18 ± 12 months. The shortest drift times, of only 2-3 months, were observed for some new accelerators just after commissioning, but these accelerators stabilized during their first 2-3 years. The short-term reproducibility and the long-term stability of local constancy checks, carried out with a sealed plane-parallel ion chamber, were also estimated by fitting empirical models to the QC measurements. The reproducibility was 0.2-0.5%, depending on the positioning practice of a device. Long-term instabilities of about 0.3%/month were observed for some checking devices. The reproducibility of local absorbed dose measurements was estimated to be about 0.5%. The proposed empirical model fitting of QC data facilitates the recognition of erroneous QC measurements and of abnormal output behaviour caused by malfunctions, offering a tool to improve dose control.
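
    The two empirical drift models named above can be fit and compared as below, with synthetic monthly output checks standing in for the QC record; the underlying drift, noise level, and time span are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(8)
    t = np.arange(0, 36, dtype=float)  # months since commissioning
    output = 100 * (1 + 0.02 * (1 - np.exp(-t / 14))) + rng.normal(0, 0.1, t.size)

    line = lambda t, a, b: a + b * t                        # straight-line drift
    expo = lambda t, a, c, tau: a + c * (1 - np.exp(-t / tau))  # single exponential

    p_line, _ = curve_fit(line, t, output)
    p_expo, _ = curve_fit(expo, t, output, p0=[100, 2, 10])

    sse = lambda f, p: np.sum((output - f(t, *p)) ** 2)
    print("line SSE:", sse(line, p_line), " exponential SSE:", sse(expo, p_expo))
    ```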

  16. Measuring teacher self-report on classroom practices: Construct validity and reliability of the Classroom Strategies Scale-Teacher Form.

    PubMed

    Reddy, Linda A; Dudek, Christopher M; Fabiano, Gregory A; Peters, Stephanie

    2015-12-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scale-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. Information is provided about the construct validity, internal consistency, test-retest reliability, and freedom from item bias of the scales. Given previous investigations with the CSS Observer Form, it was hypothesized that internal consistency would be adequate and that confirmatory factor analyses (CFA) of CSS-T data from 293 classrooms would offer empirical support for the CSS-T's Total, Composite, and subscales and yield a factor structure similar to that of the CSS Observer Form. Goodness-of-fit indices of χ2/df, Root Mean Square Error of Approximation, Goodness of Fit Index, and Adjusted Goodness of Fit Index suggested satisfactory fit of the proposed CFA models, whereas the Comparative Fit Index did not. Internal consistency estimates of .93 and .94 were obtained for the Instructional Strategies and Behavioral Strategies Total scales, respectively. Adequate test-retest reliability was found for the instructional and behavioral total scales (r = .79 and r = .84; percent agreement 93% and 93%). The CSS-T shows freedom from item bias across important teacher demographics (age, educational degree, and years of teaching experience). Implications of the results are discussed.
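
    For reference, two of the reported reliability statistics are straightforward to compute; the sketch below implements Cronbach's alpha and a test-retest correlation on synthetic item scores (generic psychometrics, not the CSS-T scoring itself).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(9)
    ability = rng.normal(0, 1, (200, 1))
    items = ability + rng.normal(0, 0.6, (200, 10))      # 10 correlated items

    retest = items.sum(axis=1) + rng.normal(0, 1, 200)   # second administration
    r = np.corrcoef(items.sum(axis=1), retest)[0, 1]
    print("alpha:", cronbach_alpha(items), " test-retest r:", r)
    ```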

  17. Asymmetric interaction and indeterminate fitness correlation between cooperative partners in the fig–fig wasp mutualism

    PubMed Central

    Wang, Rui-Wu; Sun, Bao-Fa; Zheng, Qi; Shi, Lei; Zhu, Lixing

    2011-01-01

    Empirical observations have shown that cooperative partners can compete for common resources, but what factors determine whether partners cooperate or compete remains unclear. Using the reciprocal fig–fig wasp mutualism, we show that nonlinear amplification of interference competition between fig wasps, which limits the fig wasps' ability to use a common resource (i.e. female flowers), keeps the common resource unsaturated, making cooperation locally stable. When interference competition was manually prevented, the fitness correlation between figs and fig wasps went from positive to negative. This indicates that genetic relatedness or reciprocal exchange between cooperative players, which could create spatial heterogeneity or self-restraint, was not sufficient to maintain stable cooperation. Moreover, our analysis of field-collected data shows that the fitness correlation between cooperative partners varies stochastically, and that the mainly positive fitness correlation observed during the warm season shifts to a negative correlation during the cold season owing to an increase in the initial oviposition efficiency of each fig wasp. This implies that discriminative sanctioning of less-cooperative wasps by the fig (i.e. decreasing the egg deposition efficiency per fig wasp), together with rewards to cooperative wasps, a control of the initial value, will facilitate a stable mutualism. Our finding that asymmetric interaction leads to an indeterminate fitness correlation between symbiont (i.e. cooperative actor) and host (i.e. recipient) has the potential to explain why conflict has been empirically observed in both well-documented intraspecific and interspecific cooperation systems. PMID:21490005

  18. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts result. For known fit models, and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyzing FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and, at the same time, computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with a triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented, and the robustness of PGSL for parameter estimation on noisy experimental data is verified. We further extend the scope of PGSL through a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
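
    A toy version of the global-then-local hybrid described above: broad random sampling over the parameter space stands in for PGSL, and a derivative-based least-squares refinement stands in for ML. The model is the textbook single-species 3D-diffusion FCS autocorrelation; the data are simulated.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fcs_model(tau, n, tau_d, s=5.0):
        # G(tau) for 3D diffusion; s is the axial-to-lateral beam waist ratio
        return (1 / n) / ((1 + tau / tau_d) * np.sqrt(1 + tau / (s**2 * tau_d)))

    rng = np.random.default_rng(10)
    tau = np.logspace(-6, 0, 100)
    data = fcs_model(tau, n=4.0, tau_d=1e-4) + rng.normal(0, 0.002, tau.size)

    # stage 1: global random sampling, no initial guess required
    cands = np.column_stack([rng.uniform(0.5, 20, 2000),      # n
                             10 ** rng.uniform(-6, -1, 2000)])  # tau_d
    mse = [np.mean((data - fcs_model(tau, *c)) ** 2) for c in cands]
    best = cands[int(np.argmin(mse))]

    # stage 2: local refinement seeded by the global winner
    popt, _ = curve_fit(lambda t, n, td: fcs_model(t, n, td), tau, data, p0=best)
    print("global seed:", best, " refined fit:", popt)
    ```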

  19. Evaluation of Several Two-Step Scoring Functions Based on Linear Interaction Energy, Effective Ligand Size, and Empirical Pair Potentials for Prediction of Protein-Ligand Binding Geometry and Free Energy

    PubMed Central

    Rahaman, Obaidur; Estrada, Trilce P.; Doren, Douglas J.; Taufer, Michela; Brooks, Charles L.; Armen, Roger S.

    2011-01-01

    The performance of several two-step scoring approaches for molecular docking was assessed for the ability to predict binding geometries and free energies. Two new scoring functions designed for “step 2 discrimination” were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only “interacting” ligand atoms as the “effective size” of the ligand, and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and five-fold cross validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new dataset (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ dataset where the number of ligand heavy atoms ranged from 17 to 35. This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts. PMID:21644546
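
    The LIE regression step itself reduces to ordinary least squares; the sketch below fits the conventional form dG = alpha*<dE_vdW> + beta*<dE_elec> + gamma on synthetic energies. The coefficient values are illustrative, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    e_vdw = rng.uniform(-40, -10, 50)   # averaged <dE_vdW>, kcal/mol (synthetic)
    e_elec = rng.uniform(-30, 0, 50)    # averaged <dE_elec>, kcal/mol (synthetic)
    dg = 0.18 * e_vdw + 0.33 * e_elec - 1.2 + rng.normal(0, 0.5, 50)

    # least-squares fit of the LIE coefficients alpha, beta and constant gamma
    A = np.column_stack([e_vdw, e_elec, np.ones_like(e_vdw)])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(A, dg, rcond=None)
    print(f"alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma:.2f}")
    ```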

