Sample records for parameter distribution functions

  1. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    This AFIT thesis evaluates the three-parameter Weibull distribution function for predicting fracture probability in composite materials, including an expression for the risk of rupture of a unidirectionally laminated composite subjected to pure bending and a discussion of how that equation can be simplified further.

  2. Parameter estimation of multivariate multiple regression model using Bayesian method with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution depends on the choice of prior; Jeffreys’ prior is a non-informative prior, used when no information about the parameters is available. Combining the non-informative Jeffreys’ prior with the sample information yields the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys’ prior. The estimates of β and Σ are obtained as expected values under the marginal posterior distribution functions, which are multivariate normal for β and inverse Wishart for Σ. However, computing these expected values involves integrals that are difficult to evaluate in closed form, so an approximation is needed: random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
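
    The conjugate structure described in this abstract makes the Gibbs sampler short: under the Jeffreys prior, the conditional posterior of Σ is inverse Wishart and that of β given Σ is matrix normal. Below is a minimal Python sketch of such a sampler; the data matrices are synthetic and the function name is illustrative, not from the paper.

        import numpy as np
        from scipy.stats import invwishart

        def gibbs_mvreg(Y, X, n_iter=2000, seed=0):
            """Gibbs sampler for Y = X B + E, rows of E ~ N(0, Sigma),
            under the Jeffreys prior p(B, Sigma) ~ |Sigma|^(-(m+1)/2)."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            m = Y.shape[1]
            XtX_inv = np.linalg.inv(X.T @ X)
            B_hat = XtX_inv @ X.T @ Y              # OLS fit = conditional posterior mean of B
            B = B_hat.copy()
            draws_B, draws_S = [], []
            for _ in range(n_iter):
                # Sigma | B, Y ~ InvWishart(n, S), with S the residual cross-product matrix
                resid = Y - X @ B
                Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
                # B | Sigma, Y is matrix normal: vec(B) has covariance Sigma (x) (X'X)^-1
                cov = np.kron(Sigma, XtX_inv)      # column-stacked vec ordering
                vecB = rng.multivariate_normal(B_hat.flatten(order="F"), cov)
                B = vecB.reshape(p, m, order="F")
                draws_B.append(B.copy())
                draws_S.append(Sigma)
            return np.array(draws_B), np.array(draws_S)

        # Posterior means over the retained draws approximate the Bayes estimates
        # of beta and Sigma described in the abstract.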

  3. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity of the results to the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as is the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. To compare the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output is therefore much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
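
    The difference between first- and second-order Monte Carlo can be sketched in a few lines: an outer loop draws the uncertain moments (epistemic uncertainty), and an inner loop draws parameter values from the lognormal those moments define (aleatory variability). The toy model and all numbers below are illustrative assumptions, not the radioecological model of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def dose_factor(transfer):
            # placeholder biosphere model: dose conversion factor proportional
            # to a soil-to-plant transfer parameter
            return 3.0e-9 * transfer

        # First-order MC: aleatory variability only, lognormal moments fixed.
        first_order = dose_factor(rng.lognormal(mean=0.0, sigma=0.5, size=10_000))

        # Second-order MC: epistemic uncertainty about the lognormal moments.
        outer = []
        for _ in range(200):                        # epistemic (outer) loop
            mu = rng.normal(0.0, 0.3)               # uncertain log-mean
            sigma = rng.uniform(0.3, 0.8)           # uncertain log-standard deviation
            inner = dose_factor(rng.lognormal(mu, sigma, size=1_000))
            outer.append(np.percentile(inner, 95))  # one output curve per outer draw

        # The spread of the 95th percentile across outer draws illustrates the
        # larger solution space of the second-order simulation.
        print(np.percentile(first_order, 95), min(outer), max(outer))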

  4. Collisionless distribution function for the relativistic force-free Harris sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, C. R.; Neukirch, T.

    A self-consistent collisionless distribution function for the relativistic analogue of the force-free Harris sheet is presented. This distribution function is the relativistic generalization of the distribution function for the non-relativistic collisionless force-free Harris sheet recently found by Harrison and Neukirch [Phys. Rev. Lett. 102, 135003 (2009)], as it has the same dependence on the particle energy and canonical momenta. We present a detailed calculation which shows that the proposed distribution function generates the required current density profile (and thus magnetic field profile) in a frame of reference in which the electric potential vanishes identically. The connection between the parameters of the distribution function and the macroscopic parameters such as the current sheet thickness is discussed.

  5. Quality parameters analysis of optical imaging systems with enhanced focal depth using the Wigner distribution function

    PubMed

    Zalvidea; Colautti; Sicre

    2000-05-01

    An analysis of the Strehl ratio and the optical transfer function as imaging quality parameters of optical elements with enhanced focal length is carried out by employing the Wigner distribution function. To this end, we use four different pupil functions: a full circular aperture, a hyper-Gaussian aperture, a quartic phase plate, and a logarithmic phase mask. A comparison is performed between the quality parameters and test images formed by these pupil functions at different defocus distances.
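
    As a rough illustration of one of these quality parameters: the Strehl ratio can be computed directly from a sampled pupil function (rather than via the Wigner distribution function used in the paper) as the ratio of on-axis image intensity with and without the aberration. The sketch below uses only the full circular aperture with a quadratic defocus phase; the grid size and defocus values are arbitrary assumptions.

        import numpy as np

        N = 512
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)
        rho2 = X**2 + Y**2
        aperture = (rho2 <= 1.0).astype(float)      # full circular pupil

        def strehl_ratio(defocus_waves):
            # quadratic defocus phase across the pupil, measured in waves
            pupil = aperture * np.exp(2j * np.pi * defocus_waves * rho2)
            # on-axis image amplitude is the integral of the pupil field
            return abs(pupil.sum())**2 / aperture.sum()**2

        for w in (0.0, 0.25, 0.5, 1.0):
            print(f"defocus {w:4.2f} waves -> Strehl {strehl_ratio(w):.3f}")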

  6. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
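
    A minimal sketch of the idea, using synthetic failure data and scipy's Powell optimizer as a stand-in for the paper's implementation: the Kolmogorov-Smirnov statistic is evaluated from the EDF and minimized over the three Weibull parameters.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        rng = np.random.default_rng(2)
        failures = np.sort(weibull_min.rvs(c=2.5, loc=100.0, scale=400.0,
                                           size=60, random_state=rng))
        n = len(failures)
        edf = (np.arange(1, n + 1) - 0.5) / n       # midpoint EDF plotting positions

        def ks_statistic(params):
            shape, loc, scale = params
            if shape <= 0 or scale <= 0 or loc >= failures[0]:
                return 1e6                          # keep the search in the valid region
            cdf = weibull_min.cdf(failures, c=shape, loc=loc, scale=scale)
            return np.max(np.abs(edf - cdf))        # Kolmogorov-Smirnov discrepancy

        res = minimize(ks_statistic, x0=[1.5, 50.0, 300.0], method="Powell")
        print("shape, location, scale =", res.x, " KS =", res.fun)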

  7. Use of the Weibull function to predict future diameter distributions from current plot data

    Treesearch

    Quang V. Cao

    2012-01-01

    The Weibull function has been widely used to characterize diameter distributions in forest stands. The future diameter distribution of a forest stand can be predicted by use of a Weibull probability density function from current inventory data for that stand. The parameter recovery approach has been used to “recover” the Weibull parameters from diameter moments or...

  8. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; Kashani, Mahsa H.; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method and show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
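
    Transmuting a UH into a pdf amounts to least-squares fitting of a density to the hydrograph ordinates. A sketch for the two-parameter gamma case, with synthetic ordinates and scipy's curve_fit assumed in place of the Mathematica and genetic-algorithm optimizers used in the paper:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import gamma

        t = np.arange(1.0, 25.0)                    # time (hours)
        # synthetic unit hydrograph ordinates (illustrative numbers only)
        uh = gamma.pdf(t, a=3.0, scale=2.5) \
             + np.random.default_rng(3).normal(0.0, 0.002, t.size)

        def gamma_uh(t, shape, scale):
            # the UH is treated as a gamma pdf in time (unit volume implied)
            return gamma.pdf(t, a=shape, scale=scale)

        (shape, scale), _ = curve_fit(gamma_uh, t, uh, p0=[2.0, 2.0])
        t_peak = (shape - 1.0) * scale              # mode of the gamma pdf = time to peak
        print(f"shape={shape:.2f}, scale={scale:.2f}, time to peak={t_peak:.2f} h")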

  9. The concept of temperature in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2017-12-01

    Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.

  10. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

    In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also allows the Rouse equation to be derived. The entropy-based approach makes it possible to estimate model parameters from measured sediment concentration data, which demonstrates the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.

  11. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America

  12. New methods for estimating parameters of weibull functions to characterize future diameter distributions in forest stands

    Treesearch

    Quang V. Cao; Shanna M. McCarty

    2006-01-01

    Diameter distributions in a forest stand have been successfully characterized by use of the Weibull function. Of special interest are cases where parameters of a Weibull distribution that models a future stand are predicted, either directly or indirectly, from current stand density and dominant height. This study evaluated four methods of predicting the Weibull...

  13. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter myocardial blood flow was observed in these volunteers between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels, compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.

  14. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.

  15. A bivariate gamma probability distribution with application to gust modeling. [for the ascent flight of the space shuttle

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Tubbs, J. D.

    1982-01-01

    A five-parameter bivariate gamma distribution (BGD) having two shape parameters, two location parameters, and a correlation parameter is investigated. This general BGD is expressed as a double series and as a single series of the modified Bessel function. It reduces to the known special case for equal shape parameters. Practical functions for computer evaluation of the general BGD and of special cases are presented. Applications to wind gust modeling for the ascent flight of the space shuttle are illustrated.

  16. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    PubMed

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured BRDF of three kinds of samples and simulated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  17. Oxide vapor distribution from a high-frequency sweep e-beam system

    NASA Astrophysics Data System (ADS)

    Chow, R.; Tassano, P. L.; Tsujimoto, N.

    1995-03-01

    Oxide vapor distributions have been determined as a function of operating parameters of a high-frequency sweep e-beam source combined with a programmable sweep controller. We show which parameters are significant, which parameters yield the broadest oxide deposition distribution, and the procedure used to arrive at these conclusions. A design-of-experiments strategy was used with five operating parameters: evaporation rate, sweep speed, sweep pattern (pre-programmed), phase speed (azimuthal rotation of the pattern), and profile (dwell time as a function of radial position). A design was chosen that would show which of the parameters and parameter pairs have a statistically significant effect on the vapor distribution. Witness flats were placed symmetrically across a 25-inch-diameter platen. The stationary platen was centered 24 inches above the e-gun crucible. An oxide material was evaporated under 27 different conditions. Thickness measurements were made with a stylus profilometer. This information will enable users of high-frequency e-gun systems to optimally locate the source in a vacuum system and to understand which parameters have a major effect on the vapor distribution.

  18. Reconstruction of atmospheric pollutant concentrations from remote sensing data - An application of distributed parameter observer theory

    NASA Technical Reports Server (NTRS)

    Koda, M.; Seinfeld, J. H.

    1982-01-01

    The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.

  19. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns before satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET and improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.

  20. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
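
    Estimation from the empirical characteristic function can be sketched as follows. For exponentially distributed pulse amplitudes, the stationary signal of this class of model is gamma distributed, so its characteristic function (1 - i*theta*<A>)^(-gamma) has a simple closed form; the fit matches model and empirical characteristic functions on a frequency grid. The gamma special case and the synthetic data are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(4)
        # stand-in for a filtered Poisson signal: gamma, shape 2, unit mean amplitude
        signal = rng.gamma(shape=2.0, scale=1.0, size=50_000)

        theta = np.linspace(0.1, 5.0, 40)           # frequency grid
        ecf = np.exp(1j * np.outer(theta, signal)).mean(axis=1)   # empirical CF

        def residual(params):
            g, a = params                           # intermittency parameter, mean amplitude
            model = (1.0 - 1j * theta * a) ** (-g)  # gamma characteristic function
            diff = model - ecf
            return np.concatenate([diff.real, diff.imag])   # least_squares needs reals

        fit = least_squares(residual, x0=[1.0, 0.5],
                            bounds=([1e-3, 1e-3], [50.0, 50.0]))
        print("gamma, <A> =", fit.x)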

  1. Thermomechanical Fractional Model of TEMHD Rotational Flow

    PubMed Central

    Hamza, F.; Abd El-Latief, A.; Khatan, W.

    2017-01-01

    In this work, the fractional mathematical model of an unsteady rotational flow of Xanthan gum (XG) between two cylinders in the presence of a transverse magnetic field has been studied. This model contains two fractional parameters, α and β, representing thermomechanical effects. The Laplace transform is used to obtain the numerical solutions. The influence of the fractional parameters on the field distributions (temperature, velocity, stress, and electric current) has been discussed graphically. The relationship between the rotation of both cylinders and the fractional parameters has been examined through the field distributions for small and large values of time. PMID:28045941

  2. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactic example of predicting rainfall runoff, (2) we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
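
    A minimal sketch of a Gaussian copula likelihood over model errors, with empirical (semiparametric) marginals and an AR(1) correlation structure; both the correlation model and the test data are illustrative assumptions, not the exact construction of the paper.

        import numpy as np
        from scipy.stats import norm, multivariate_normal

        def copula_loglik(errors, rho):
            """Gaussian-copula log-likelihood of a residual series: marginals come
            from the empirical CDF, serial dependence from an AR(1) correlation."""
            n = len(errors)
            ranks = np.argsort(np.argsort(errors)) + 1.0
            u = ranks / (n + 1.0)                   # empirical CDF values in (0, 1)
            z = norm.ppf(u)                         # normal scores
            idx = np.arange(n)
            R = rho ** np.abs(idx[:, None] - idx[None, :])
            # copula density = MVN(z; 0, R) divided by the standard-normal marginals
            return multivariate_normal(np.zeros(n), R).logpdf(z) - norm.logpdf(z).sum()

        errors = np.random.default_rng(5).normal(size=200)
        print(copula_loglik(errors, rho=0.3))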

  3. EVOLUTION OF THE VELOCITY-DISPERSION FUNCTION OF LUMINOUS RED GALAXIES: A HIERARCHICAL BAYESIAN MEASUREMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu Yiping; Bolton, Adam S.; Dawson, Kyle S.

    2012-04-15

    We present a hierarchical Bayesian determination of the velocity-dispersion function of approximately 430,000 massive luminous red galaxies observed at relatively low spectroscopic signal-to-noise ratio (S/N ≈ 3-5 per 69 km s⁻¹) by the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey III. We marginalize over spectroscopic redshift errors, and use the full velocity-dispersion likelihood function for each galaxy to make a self-consistent determination of the velocity-dispersion distribution parameters as a function of absolute magnitude and redshift, correcting as well for the effects of broadband magnitude errors on our binning. Parameterizing the distribution at each point in the luminosity-redshift plane with a log-normal form, we detect significant evolution in the width of the distribution toward higher intrinsic scatter at higher redshifts. Using a subset of deep re-observations of BOSS galaxies, we demonstrate that our distribution-parameter estimates are unbiased regardless of spectroscopic S/N. We also show through simulation that our method introduces no systematic parameter bias with redshift. We highlight the advantage of the hierarchical Bayesian method over frequentist 'stacking' of spectra, and illustrate how our measured distribution parameters can be adopted as informative priors for velocity-dispersion measurements from individual noisy spectra.

  4. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  5. Nonequilibrium approach regarding metals from a linearised kappa distribution

    NASA Astrophysics Data System (ADS)

    Domenech-Garret, J. L.

    2017-10-01

    The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal on which an external electric field is applied. We show that such a kappa index causes departures from equilibrium depending on the physical magnitudes. Finally, we study the role of temperature and electric field on such a parameter, which characterises the electron population of a metal out of equilibrium.

  6. Study of the zinc-silver oxide battery system

    NASA Technical Reports Server (NTRS)

    Nanis, L.

    1973-01-01

    Theoretical and experimental models for the evaluation of current distribution in flooded, porous electrodes are discussed. An approximation for the local current distribution function was derived for conditions of a linear overpotential, a uniform concentration, and a very conductive matrix. By considering the porous electrode to be an analog of chemical catalyst structures, a dimensionless performance parameter was derived from the approximated current distribution function. In this manner the electrode behavior was characterized in terms of an electrochemical Thiele parameter and an effectiveness factor. It was shown that the electrochemical engineering approach makes possible the organizations of theoretical descriptions and of practical experience in the form of dimensionless parameters, such as the electrochemical Thiele parameters, and hence provides useful information for the design of new electrochemical systems.

  7. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  8. dftools: Distribution function fitting

    NASA Astrophysics Data System (ADS)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

  9. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine the best method. The prior used in the Bayes method is Jeffreys’ non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed in R, and the results are displayed in tables to facilitate the comparisons.
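
    For the Rayleigh pdf f(x; sigma) = (x/sigma^2) exp(-x^2/(2 sigma^2)), writing theta = sigma^2, the likelihood combined with Jeffreys' prior pi(theta) proportional to 1/theta gives an inverse-gamma posterior, so the MLE and the Bayes estimator under squared error loss both have closed forms. A sketch of the bias/MSE comparison (squared error is used here as the simplest of the losses the paper considers; all settings are arbitrary), written in Python rather than the paper's R:

        import numpy as np

        rng = np.random.default_rng(6)
        theta_true = 4.0                            # sigma^2
        n, reps = 20, 20_000

        bias_mle = bias_bayes = mse_mle = mse_bayes = 0.0
        for _ in range(reps):
            x = rng.rayleigh(scale=np.sqrt(theta_true), size=n)
            s = np.sum(x**2) / 2.0
            mle = s / n                             # MLE of theta
            bayes = s / (n - 1)                     # posterior mean of InvGamma(n, s)
            bias_mle += (mle - theta_true) / reps
            bias_bayes += (bayes - theta_true) / reps
            mse_mle += (mle - theta_true) ** 2 / reps
            mse_bayes += (bayes - theta_true) ** 2 / reps

        print(f"MLE:   bias={bias_mle:+.4f}  MSE={mse_mle:.4f}")
        print(f"Bayes: bias={bias_bayes:+.4f}  MSE={mse_bayes:.4f}")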

  10. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposed a new method to determine the neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. Firstly, we proposed a general definition for the information efficiency, which is relevant to mutual information and neuronal energy consumption. The energy consumption is composed of two parts: neuronal basic energy consumption and neuronal spike emission energy consumption. A parameter to model the relative importance of energy consumption is introduced in the definition of the information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution based on the analysis of the dependency of the mutual information and the energy consumption on the shape of the functions of the firing rate distributions. Furthermore, we developed a rapid algorithm to search the parameter values of the optimum firing rate distribution function. Finally, we found with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared to the mutual information or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a form of sigmoid if the stimuli distribution is normal. PMID:28270760

  11. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  12. Interpretation of environmental tracers in groundwater systems with stagnant water zones.

    PubMed

    Maloszewski, Piotr; Stichler, Willibald; Zuber, Andrzej

    2004-03-01

    Lumped-parameter models are commonly applied for determining the age of water from time records of transient environmental tracers. The simplest models (e.g. piston flow or exponential) are also applicable for dating based on the decay or accumulation of tracers in groundwater systems. The models are based on the assumption that the transit time distribution function (exit age distribution function) of the tracer particles in the investigated system adequately represents the distribution of flow lines and is described by a simple function. A chosen or fitted function (called the response function) describes the transit time distribution of a tracer which would be observed at the output (discharge area, spring, stream, or pumping wells) in the case of an instantaneous injection at the entrance (recharge area). Due to large space and time scales, response functions are not measurable in groundwater systems, therefore, functions known from other fields of science, mainly from chemical engineering, are usually used. The type of response function and the values of its parameters define the lumped-parameter model of a system. The main parameter is the mean transit time of tracer through the system, which under favourable conditions may represent the mean age of mobile water. The parameters of the model are found by fitting calculated concentrations to the experimental records of concentrations measured at the outlet. The mean transit time of tracer (often called the tracer age), whether equal to the mean age of water or not, serves in adequate combinations with other data for determining other useful parameters, e.g. the recharge rate or the content of water in the system. The transit time distribution and its mean value serve for confirmation or determination of the conceptual model of the system and/or estimation of its potential vulnerability to anthropogenic pollution. In the interpretation of environmental tracer data with the aid of the lumped-parameter models, the influence of diffusion exchange between mobile water and stagnant or quasi-stagnant water is seldom considered, though it leads to large differences between tracer and water ages. Therefore, the article is focused on the transit time distribution functions of the most common lumped-parameter models, particularly those applicable for the interpretation of environmental tracer data in double-porosity aquifers, or aquifers in which aquitard diffusion may play an important role. A case study is recalled for a confined aquifer in which the diffusion exchange with aquitard most probably strongly influenced the transport of environmental tracers. Another case study presented is related to the interpretation of environmental tracer data obtained from lysimeters installed in the unsaturated zone with a fraction of stagnant water.
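
    The core computation behind these lumped-parameter models is a convolution of the tracer input history with the chosen response function, weighted by radioactive decay for tracers such as tritium. A sketch for the exponential model (the mean transit time and the input series are made-up values):

        import numpy as np

        dt = 1.0                                    # time step (years)
        t = np.arange(0.0, 100.0, dt)
        c_in = np.where((t > 30) & (t < 40), 100.0, 10.0)   # synthetic tritium input (TU)

        T_mean = 12.0                               # mean transit time of the model (years)
        lam = np.log(2.0) / 12.32                   # tritium decay constant (half-life 12.32 y)

        # exponential transit time distribution, weighted by radioactive decay
        g = (1.0 / T_mean) * np.exp(-t / T_mean) * np.exp(-lam * t)

        # output concentration at the spring or well: convolution of input and response
        c_out = np.convolve(c_in, g)[: len(t)] * dt
        print(c_out[40:50])                         # concentrations after the input pulse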

  13. Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps

    NASA Astrophysics Data System (ADS)

    Goto, Shin-itiro; Umeno, Ken

    2018-03-01

    Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps, where it is defined on a subset of the real line and its probability distribution function is the Cauchy distribution with some parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and then it is shown that the derived maps can be characterized. Also, with an induced symplectic structure from a statistical structure, symplectic and information geometric aspects of the derived maps are discussed.
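
    The statistical picture is easy to verify numerically for one member of the family: for the map x -> (x - 1/x)/2 the standard Cauchy distribution is invariant, so iterating the map leaves the empirical distribution on the Cauchy curve. (This particular parameter choice is made for illustration and is not necessarily the one emphasized in the paper.)

        import numpy as np
        from scipy.stats import cauchy

        rng = np.random.default_rng(7)
        x = cauchy.rvs(size=100_000, random_state=rng)   # start on the invariant density

        for _ in range(10):                         # iterate the map ensemble-wise
            x = 0.5 * (x - 1.0 / x)

        # empirical quantiles should still match the standard Cauchy quantiles
        qs = [0.1, 0.25, 0.5, 0.75, 0.9]
        print(np.quantile(x, qs))
        print(cauchy.ppf(qs))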

  14. The correlation function for density perturbations in an expanding universe. IV - The evolution of the correlation function. [galaxy distribution

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1979-01-01

    The evolution of the two-point correlation function for the large-scale distribution of galaxies in an expanding universe is studied on the assumption that the perturbation densities lie in a Gaussian distribution centered on any given mass scale. The perturbations are evolved according to the Friedmann equation, and the correlation function for the resulting distribution of perturbations at the present epoch is calculated. It is found that: (1) the computed correlation function gives a satisfactory fit to the observed function in cosmological models with a density parameter (Omega) of approximately unity, provided that a certain free parameter is suitably adjusted; (2) the power-law slope in the nonlinear regime reflects the initial fluctuation spectrum, provided that the density profile of individual perturbations declines more rapidly than the -2.4 power of distance; and (3) both positive and negative contributions to the correlation function are predicted for cosmological models with Omega less than unity.

  15. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  16. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
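
    A stripped-down EM sketch for complete (non-censored) data and a two-subpopulation Weibull mixture conveys the structure of the algorithm; the paper's version additionally handles censored and postmortem/nonpostmortem data. The M-step weighted MLE is done numerically here with a general-purpose optimizer.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        rng = np.random.default_rng(8)
        data = np.concatenate([
            weibull_min.rvs(1.2, scale=100.0, size=300, random_state=rng),
            weibull_min.rvs(3.0, scale=500.0, size=200, random_state=rng),
        ])

        def weighted_nll(params, x, w):
            c, scale = params
            if c <= 0 or scale <= 0:
                return 1e9                          # penalty keeps the search valid
            return -np.sum(w * weibull_min.logpdf(x, c, scale=scale))

        pi1 = 0.5                                   # initial mixing weight
        comp = [np.array([1.0, 80.0]), np.array([2.0, 400.0])]   # (shape, scale) guesses

        for _ in range(50):                         # EM iterations
            # E-step: responsibility of component 1 for each observation
            p1 = pi1 * weibull_min.pdf(data, comp[0][0], scale=comp[0][1])
            p2 = (1.0 - pi1) * weibull_min.pdf(data, comp[1][0], scale=comp[1][1])
            r = p1 / (p1 + p2)
            # M-step: weighted Weibull MLE per component, then update the weight
            comp[0] = minimize(weighted_nll, comp[0], args=(data, r),
                               method="Nelder-Mead").x
            comp[1] = minimize(weighted_nll, comp[1], args=(data, 1.0 - r),
                               method="Nelder-Mead").x
            pi1 = r.mean()

        print("weight:", pi1, " component 1:", comp[0], " component 2:", comp[1])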

  17. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)

  18. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
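
    With a Gamma(a, b) prior (rate parametrization) on the Poisson intensity lambda, the posterior after observing x_1..x_n is Gamma(a + sum(x), b + n), so Bayes estimators of lambda and of the reliability R(t) = exp(-lambda*t) are available in closed form. A sketch of the mean-squared-error comparison against the MLE (the prior settings and t are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(9)
        lam_true, n, t, reps = 2.0, 10, 1.0, 20_000
        a, b = 2.0, 1.0                             # gamma prior (shape, rate)

        mse_mle = mse_bayes = 0.0
        for _ in range(reps):
            x = rng.poisson(lam_true, size=n)
            mle = x.mean()                          # maximum likelihood estimator
            a_post, b_post = a + x.sum(), b + n     # gamma posterior parameters
            bayes = a_post / b_post                 # posterior mean of lambda
            mse_mle += (mle - lam_true) ** 2 / reps
            mse_bayes += (bayes - lam_true) ** 2 / reps

        print(f"MSE MLE={mse_mle:.4f}  MSE Bayes={mse_bayes:.4f}")
        # Bayes estimate of reliability R(t) = E[exp(-lambda*t) | data], from the
        # gamma moment generating function evaluated at -t:
        print("R(t) estimate:", (b_post / (b_post + t)) ** a_post)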

  19. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  20. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
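
    Pearson's classical method reduces five-moment matching for a two-Gaussian mixture to the roots of a ninth-degree polynomial; the sketch below takes the simpler numerical route of matching the first five raw moments with a least-squares solver, which conveys the same moment-based idea. All data and starting values are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(10)
        # synthetic pixel intensities: static background plus busy foreground
        data = np.concatenate([rng.normal(80.0, 5.0, 6000),
                               rng.normal(140.0, 20.0, 4000)])

        def raw_moment(mu, sig, k):
            # E[X^k] for N(mu, sig^2) via M_k = mu*M_{k-1} + (k-1)*sig^2*M_{k-2}
            m = [1.0, mu]
            for j in range(2, k + 1):
                m.append(mu * m[j - 1] + (j - 1) * sig**2 * m[j - 2])
            return m[k]

        sample = np.array([np.mean(data**k) for k in range(1, 6)])

        def residuals(p):
            w, m1, s1, m2, s2 = p
            model = np.array([w * raw_moment(m1, s1, k)
                              + (1 - w) * raw_moment(m2, s2, k)
                              for k in range(1, 6)])
            return (model - sample) / sample        # relative errors keep scales comparable

        fit = least_squares(residuals, x0=[0.5, 70.0, 10.0, 150.0, 30.0],
                            bounds=([0.01, 0.0, 0.1, 0.0, 0.1],
                                    [0.99, 255.0, 100.0, 255.0, 100.0]))
        print("weight, mu1, sigma1, mu2, sigma2 =", fit.x)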

  1. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, a D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.

  2. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS) (Differential Evolution Adaptive Metropolis), combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe efficiency (NS), normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining ones (L5-L7) fall into the formal category. L5 builds on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary between likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have almost the same effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may give biased and unreliable parameter estimates due to violation of the residual error assumptions. Thus, likelihood function L7 provides credible posterior distributions of the model parameters and can therefore be employed for further applications.
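
    The informal/formal distinction is easy to state in code: an informal likelihood such as NSE scores the fit directly, whereas a formal likelihood writes down an explicit error model (below, i.i.d. Gaussian residuals, the simplest formal choice). Both functions are generic sketches, not the exact forms used in the study.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: an informal likelihood measure."""
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def gaussian_loglik(obs, sim):
            """Formal log-likelihood for i.i.d. Gaussian residuals, with the
            residual variance profiled at its maximum likelihood value."""
            n = len(obs)
            sigma2 = np.mean((obs - sim) ** 2)
            return -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)

        obs = np.array([3.0, 5.2, 9.8, 7.1, 4.0])
        sim = np.array([2.7, 5.6, 9.1, 7.9, 4.3])
        print(nse(obs, sim), gaussian_loglik(obs, sim))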

  3. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  4. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated results of the high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which the probability of large errors is low but small errors around zero are approximately equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
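
    The equivalence in step (1) holds because, once sigma^2 is profiled at its maximum likelihood value, the Gaussian iid log-likelihood is a decreasing function of the same sum of squared errors that the NSE is built from, so the two criteria rank candidate simulations identically. A small sketch of ours, with a Box-Cox helper of the kind used in steps (2)-(3):

      import numpy as np

      def nse(obs, sim):
          sse = np.sum((obs - sim)**2)
          return 1.0 - sse/np.sum((obs - obs.mean())**2)

      def gauss_iid_loglik(obs, sim):
          # sigma^2 profiled out at its MLE, sse/n; monotone decreasing in sse
          n, sse = len(obs), np.sum((obs - sim)**2)
          return -0.5*n*(np.log(2*np.pi*sse/n) + 1.0)

      def boxcox(q, lam):
          # Box-Cox transform, used to reduce residual heteroscedasticity
          return (q**lam - 1.0)/lam if lam != 0 else np.log(q)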

  5. Linear dispersion properties of ring velocity distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandas, Marek, E-mail: marek.vandas@asu.cas.cz; Hellinger, Petr; Institute of Atmospheric Physics, AS CR, Bocni II/1401, CZ-14100 Prague

    2015-06-15

    Linear properties of ring velocity distribution functions are investigated. The dispersion tensor is presented in a form similar to the case of a Maxwellian distribution function, but for a general distribution function separable in velocities. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results, identical to those of the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.

  6. Electron acoustic nonlinear structures in planetary magnetospheres

    NASA Astrophysics Data System (ADS)

    Shah, K. H.; Qureshi, M. N. S.; Masood, W.; Shah, H. A.

    2018-04-01

    In this paper, we have studied the linear and nonlinear propagation of electron acoustic waves (EAWs) comprising cold and hot populations in which the ions form the neutralizing background. The hot electrons have been assumed to follow the generalized (r, q) distribution, which has the advantage that it mimics most of the distribution functions observed in space plasmas. Interestingly, it has been found that unlike the Maxwellian and kappa distributions, the electron acoustic waves admit not only rarefactive structures but also allow the formation of compressive solitary structures for the generalized (r, q) distribution. It has been found that the flatness parameter r, the tail parameter q, and the nonlinear propagation velocity u affect the propagation characteristics of nonlinear EAWs. Using plasma parameters typically found in Saturn's magnetosphere and the Earth's auroral region, where two populations of electrons and electron acoustic solitary waves (EASWs) have been observed, we have given an estimate of the scale lengths over which these nonlinear waves are expected to form and how the size of these structures would vary with changes in the shape of the distribution function and in the plasma parameters.

  7. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.

  8. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. The performance of the different estimators is assessed in terms of total mean square error and total relative deviation. It is found that the modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides more efficient and precise parameter estimates than the other estimators considered. The simulation results were further confirmed using two real-life examples where maximum likelihood and moment estimators were also considered.
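
    For reference, the traditional two-point percentile estimator that these modifications start from follows directly from the Pareto CDF F(x) = 1 - (k/x)^alpha, x >= k. The sketch below is our illustration of that baseline; the article's modified estimators replace the chosen percentiles and are not reproduced here.

      import numpy as np

      def pareto_percentile_fit(x, p1=0.25, p2=0.75):
          # solve (k/x_p)^alpha = 1 - p at two sample quantiles
          x1, x2 = np.quantile(x, [p1, p2])
          alpha = np.log((1 - p1)/(1 - p2))/np.log(x2/x1)
          k = x1*(1 - p1)**(1/alpha)
          return k, alpha

      rng = np.random.default_rng(1)
      sample = 2.0*(1 - rng.uniform(size=2000))**(-1/3.0)   # Pareto(k=2, alpha=3)
      print(pareto_percentile_fit(sample))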

  9. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

    An extensive analysis for the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with a post-detection diversity reception system over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.

  10. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
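
    The simulation side of such a study is easy to reproduce. The toy sketch below generates a stationary Gaussian AR(1) process (exponential autocorrelation) and tabulates the durations of overshoots above a crossing level; an excursion cut off at the end of the series is simply truncated here.

      import numpy as np

      def overshoot_durations(rho=0.9, level=1.0, n=200_000, seed=0):
          rng = np.random.default_rng(seed)
          x = np.empty(n)
          x[0] = rng.normal()
          innov = rng.normal(scale=np.sqrt(1 - rho**2), size=n)  # keeps unit variance
          for t in range(1, n):
              x[t] = rho*x[t-1] + innov[t]
          above = x > level
          # run-length encode the True segments (the overshoots)
          changes = np.flatnonzero(np.diff(above.astype(int)))
          runs = np.diff(np.concatenate(([0], changes + 1, [n])))
          return runs[0::2] if above[0] else runs[1::2]

      d = overshoot_durations()
      print("mean overshoot duration:", d.mean(), " variance:", d.var())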

  11. The beta Burr type X distribution properties with application.

    PubMed

    Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran

    2016-01-01

    We develop a new continuous distribution, called the beta Burr type X distribution, that extends the Burr type X distribution. We provide a comprehensive mathematical treatment of this distribution. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function and rth moment of the order statistics. We consider maximum likelihood estimation of the parameters. Additionally, the asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. An illustration with a real dataset indicates that this new distribution can serve as a good alternative model for positive real data in many areas.

  12. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, the prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with experts' opinion-based priors, whereas two coefficients are not statistically significant when fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared to traditional SEM.

  13. Estimating Commute Distances of U.S. Army Reservists by Regional and Unit Characteristics

    DTIC Science & Technology

    1990-09-01

    A multiple regression equation is used to estimate the parameters of the commute distance distribution as a function of reserve center and market characteristics. The results of the ... recruiting personnel to meet unit fill rates. An important objective of the USAREC is to identify market areas that will support new reserve units [Ref. 2:p

  14. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimating selectivity parameters are discussed.

  15. A general framework for updating belief distributions.

    PubMed

    Bissiri, P G; Holmes, C C; Walker, S G

    2016-11-01

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
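
    The update rule itself is compact: posterior(theta) is proportional to prior(theta) times exp(-w * cumulative loss), and with the negative log-likelihood as the loss and w = 1 it collapses to Bayes' rule. A grid sketch of ours for a median (absolute loss), with an illustrative learning rate w whose calibration is a separate question:

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.standard_t(df=3, size=50) + 1.0     # data; no full model assumed

      theta = np.linspace(-3, 5, 2001)
      log_prior = -0.5*theta**2                   # N(0,1) prior, up to a constant
      w = 1.0                                     # learning rate (assumed)
      cum_loss = np.abs(x[:, None] - theta[None, :]).sum(axis=0)
      log_post = log_prior - w*cum_loss
      post = np.exp(log_post - log_post.max())
      post /= np.trapz(post, theta)
      print("posterior mean for the median:", np.trapz(theta*post, theta))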

  16. Dynamics of modulated beams in spectral domain

    DOE PAGES

    Yampolsky, Nikolai A.

    2017-07-16

    A general formalism for describing the dynamics of modulated beams along linear beamlines is developed. We describe modulated beams with a spectral distribution function, which represents the Fourier transform of the conventional beam distribution function in the 6-dimensional phase space. The introduced spectral distribution function is localized in some region of the spectral domain for nearly monochromatic modulations. It can be characterized by a small number of typical parameters, such as the lowest-order moments of the spectral distribution. We study the evolution of modulated beams in linear beamlines and find that the characteristic spectral parameters transform linearly. The developed approach significantly simplifies the analysis of various schemes proposed for seeding X-ray free electron lasers. We use this approach to study several recently proposed schemes and find the bandwidth of the output bunching in each case.

  17. Theoretical study of the dependence of single impurity Anderson model on various parameters within distributional exact diagonalization method

    NASA Astrophysics Data System (ADS)

    Syaina, L. P.; Majidi, M. A.

    2018-04-01

    The single impurity Anderson model describes a system consisting of non-interacting conduction electrons coupled with a localized orbital having strongly interacting electrons at a particular site. This model has been proven successful in explaining the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how the model properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose a theoretical study of those aspects of the single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of some conduction electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all the distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of those controllable parameters, and discuss our results in connection with the criteria for the occurrence of the metal-insulator transition in this system.

  18. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    NASA Astrophysics Data System (ADS)

    Penson, Karol A.; Życzkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions P_s(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P_s(x) in terms of a combination of s hypergeometric functions of the type _sF_{s-1}. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.

  20. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
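
    Under the Poisson occurrence model, the two hazard functions named here have closed forms: the exceedance probability in time T is 1 - exp(-lambda*T*(1 - F(m))) and the mean return period is 1/(lambda*(1 - F(m))). The toy sketch below propagates the uncertainty of the activity rate by a parametric bootstrap, a simplification of the paper's asymptotic-normality and smoothed-bootstrap intervals.

      import numpy as np

      def hazard(lmbda, T, tail):   # tail = 1 - F(m) for the magnitude of interest
          return 1.0 - np.exp(-lmbda*T*tail), 1.0/(lmbda*tail)

      rng = np.random.default_rng(3)
      N, T_obs, tail = 120, 40.0, 0.02             # toy catalog: N events in T_obs years
      lam_sim = rng.poisson(N, size=10_000)/T_obs  # bootstrap of the rate estimate
      p_exc, _ = hazard(lam_sim, T=50.0, tail=tail)
      print("90% interval, exceedance probability:", np.percentile(p_exc, [5, 95]))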

  1. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  2. Parameters of Higher Education Quality Assessment System at Universities

    ERIC Educational Resources Information Center

    Savickiene, Izabela

    2005-01-01

    The article analyses the system of institutional quality assessment at universities and lays foundation to its functional, morphological and processual parameters. It also presents the concept of the system and discusses the distribution of systems into groups, defines information, accountability, improvement and benchmarking functions of higher…

  3. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
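
    The central PME step, picking the density that maximizes entropy subject to moment constraints, can be computed through its convex dual over the Lagrange multipliers. A self-contained toy sketch (support, target moments, and starting point are ours, not the paper's):

      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 4.0, 2001)            # assumed bounded support
      T = np.vstack([x, x**2, x**3, x**4])       # moment functions
      m = np.array([1.0, 1.2, 1.68, 2.69])       # toy target moments

      def dual(lam):
          # psi(lam) = log Z(lam) - lam.m; at its minimum, E[T] equals m
          logits = lam @ T
          M = logits.max()                       # stabilize the exponential
          return M + np.log(np.trapz(np.exp(logits - M), x)) - lam @ m

      res = minimize(dual, x0=np.zeros(4))
      logits = res.x @ T
      dens = np.exp(logits - logits.max())
      dens /= np.trapz(dens, x)                  # maximum-entropy density
      print("fitted moments:", [float(np.trapz(x**k*dens, x)) for k in (1, 2, 3, 4)])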

  4. A New Lifetime Distribution with Bathtube and Unimodal Hazard Function

    NASA Astrophysics Data System (ADS)

    Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.

    2008-11-01

    In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing and decreasing hazard functions. Some special cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered for estimating the three parameters present in the model. The methodology is illustrated on a real data set of industrial devices on a life test.

  5. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  6. Distributed traffic signal control using fuzzy logic

    NASA Technical Reports Server (NTRS)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.

  7. Modeling of particle radiative properties in coal combustion depending on burnout

    NASA Astrophysics Data System (ADS)

    Gronarz, Tim; Habermehl, Martin; Kneer, Reinhold

    2017-04-01

    In the present study, the absorption and scattering efficiencies as well as the scattering phase function of a cloud of coal particles are described as functions of the particle combustion progress. Mie theory for coated particles is applied as the mathematical model. The scattering and absorption properties are determined by several parameters: the size distribution, the spectral distribution of incident radiation, and the spectral index of refraction of the particles. A study to determine the influence of each parameter is performed, finding that the largest effect is due to the refractive index, followed by the effect of the size distribution. The influence of the incident radiation profile is negligible. As a part of this study, the possibility of applying a constant index of refraction is investigated. Finally, scattering and absorption efficiencies as well as the phase function are presented as functions of burnout with the presented model, and the results are discussed.

  8. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
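
    The MCMC step can be imitated with a toy random-walk Metropolis sampler over the log-normal parameters (mu, sigma). This is a simplified stand-in of ours, not the paper's likelihood, which is constructed from point estimates drawn from heterogeneous surveys.

      import numpy as np

      rng = np.random.default_rng(4)
      a_obs = rng.lognormal(mean=1.0, sigma=1.5, size=60)   # toy separations in AU

      def log_post(mu, sigma):
          if sigma <= 0:
              return -np.inf
          z = (np.log(a_obs) - mu)/sigma
          return -a_obs.size*np.log(sigma) - 0.5*np.sum(z**2)   # flat priors assumed

      chain, state, lp = [], np.array([0.0, 1.0]), -np.inf
      for _ in range(20_000):
          prop = state + rng.normal(scale=0.1, size=2)
          lp_prop = log_post(*prop)
          if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
              state, lp = prop, lp_prop
          chain.append(state.copy())
      chain = np.array(chain[5_000:])                # discard burn-in
      print("posterior means (mu, sigma):", chain.mean(axis=0))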

  9. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical evidence relating to the mechanical parameters of rock provided new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing the random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.

  10. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which refers to the value immediately prior to each target event. The distribution of the background is a symmetric function whose center corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  11. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma RTD function and its first and second moments are derived from the individual two-parameter gamma distributions of the randomly distributed variables, tracer travel distance and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site through its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.

  12. Determination of Anisotropic Ion Velocity Distribution Function in Intrinsic Gas Plasma. Theory.

    NASA Astrophysics Data System (ADS)

    Mustafaev, A.; Grabovskiy, A.; Murillo, O.; Soukhomlinov, V.

    2018-02-01

    The first seven coefficients of the expansion of the energy and angular distribution functions in Legendre polynomials for Hg+ ions in Hg vapor plasma with the parameter E/P ≈ 400 V/(cm Torr) are measured for the first time using a planar one-sided probe. The analytic solution to the Boltzmann kinetic equation for ions in the plasma of their parent gas is obtained under the conditions when resonant charge exchange is the predominant process and ions acquire on their mean free path a velocity much higher than the characteristic velocity of the thermal motion of atoms. The presence of an ambipolar field of arbitrary strength is taken into account. It is shown that the ion velocity distribution function is determined by two parameters and differs substantially from the Maxwellian distribution. Comparison of the calculated drift velocities of He+ ions in He, Ar+ in Ar, and Hg+ in Hg with the available experimental data shows good agreement. The calculated ion distribution function correctly describes the experimental data obtained from its measurement. Analysis of the results shows that, in spite of the presence of the strong field, the ion velocity distribution functions are isotropic for ion velocities lower than the average thermal velocity of atoms. With increasing ion velocity, the distribution becomes more and more extended in the direction of the electric field.

  13. Forward and backward uncertainty propagation: an oxidation ditch modelling example.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G

    2003-01-01

    In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a worked example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be achieved by carrying out backward uncertainty propagation analysis.
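
    Both directions can be illustrated with a toy Monte Carlo: forward propagation pushes a parameter sample through the model, and a crude backward step keeps the parameter combinations whose outputs land in an observed band. This rejection-style sketch is ours; the note's actual backward procedure generalises parameter estimation error analysis.

      import numpy as np

      def model(k, theta):              # stand-in for the oxidation ditch model
          return k*np.exp(-theta)

      rng = np.random.default_rng(5)
      k = rng.uniform(0.5, 2.0, 50_000)
      theta = rng.uniform(0.0, 1.0, 50_000)
      y = model(k, theta)

      print("forward: 5-95% output range:", np.percentile(y, [5, 95]))
      ok = (y > 0.8) & (y < 1.0)        # backward: condition on an output band
      print("backward: k consistent with band:", k[ok].min(), "-", k[ok].max())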

  14. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (the exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (the Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious there, the mixed exponential distribution appears nonetheless to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
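
    A direct evaluation of candidate amount distributions can be prototyped in a few lines with scipy; the sketch below fits some of the named candidates to toy wet-day amounts and ranks them by AIC. The mixed exponential and hybrid exponential/Pareto models require custom likelihoods and are omitted here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      wet = rng.gamma(shape=0.8, scale=6.0, size=3000)   # toy wet-day amounts (mm)

      candidates = {"exponential": stats.expon, "gamma": stats.gamma,
                    "weibull": stats.weibull_min, "skew-normal": stats.skewnorm}
      for name, dist in candidates.items():
          params = dist.fit(wet, floc=0) if name != "skew-normal" else dist.fit(wet)
          ll = np.sum(dist.logpdf(wet, *params))
          # len(params) over-counts by one where loc was held fixed; fine for a sketch
          print(f"{name:12s} AIC = {2*len(params) - 2*ll:10.1f}")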

  15. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) value was approximately constant (E_a,int = 95.2 kJ mol^-1 and E_a,diff = 96.6 kJ mol^-1, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol^-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used for estimation of the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(E_a)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.

  16. Wigner distribution function and kurtosis parameter of vortex beams propagating through turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Suo, Qiangbo; Han, Yiping; Cui, Zhiwei

    2017-09-01

    Based on the extended Huygens-Fresnel integral, the analytical expressions for the Wigner distribution function (WDF) and kurtosis parameter of partially coherent flat-topped vortex (PCFTV) beams propagating through atmospheric turbulence and free space are derived. The WDF and kurtosis parameter of PCFTV beams propagating through turbulent atmosphere are discussed with numerical examples. The numerical results show that the beam quality depends on the structure constant, the inner scale of turbulence, the outer scale of turbulence, the spatial correlation length, the wavelength, and the beam order. PCFTV beams are less affected by turbulence than partially coherent flat-topped (PCFT) beams under the same conditions, and will be useful in free-space optical communications.

  17. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time are studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  18. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
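
    The Monte-Carlo part of the argument is simple to sketch: draw yearly maxima from a lognormal whose location drifts upward, record the first year the time-zero design level is exceeded, and fit a Weibull to the waiting times. All numbers below are illustrative, not the paper's.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      design = np.exp(stats.norm.ppf(0.99, loc=0.0, scale=0.5))  # 100-yr level at t=0

      def waiting_time(trend=0.005, horizon=2000):
          for t in range(1, horizon + 1):
              if rng.lognormal(mean=trend*t, sigma=0.5) > design:
                  return t          # first exceedance year = realized return period
          return horizon

      T = np.array([waiting_time() for _ in range(5000)], dtype=float)
      c, loc, scale = stats.weibull_min.fit(T, floc=0)
      print("mean return period:", T.mean(), " fitted Weibull shape:", round(c, 2))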

  19. An algebraic aspect of Pareto mixture parameter estimation using censored sample: A Bayesian approach.

    PubMed

    Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya

    2018-04-27

    Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each of the two subgroups is characterized by the same functional form with unknown, distinct shape and scale parameters. Bayes estimators have been derived using flat and conjugate priors under the squared error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.

  20. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  1. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to reduce the estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from the original higher-moment computations to first-order moment calculations. The estimators for the CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, the results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing the Monte-Carlo experiment so that samples are drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but are all treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.

  2. Observation of Chorus Waves by the Van Allen Probes: Dependence on Solar Wind Parameters and Scale Size

    NASA Technical Reports Server (NTRS)

    Aryan, Homayon; Sibeck, David; Balikhin, Michael; Agapitov, Oleksiy; Kletzing, Craig

    2016-01-01

    Highly energetic electrons in the Earth's Van Allen radiation belts can cause serious damage to spacecraft electronic systems and affect the atmospheric composition if they precipitate into the upper atmosphere. Whistler mode chorus waves have attracted significant attention in recent decades for their crucial role in the acceleration and loss of energetic electrons that ultimately change the dynamics of the radiation belts. The distribution of these waves in the inner magnetosphere is commonly presented as a function of geomagnetic activity. However, geomagnetic indices are nonspecific parameters that are compiled from imperfectly covered ground-based measurements. The present study uses wave data from the two Van Allen Probes to present the distribution of lower-band chorus waves not only as functions of a single geomagnetic index and solar wind parameters but also as functions of combined parameters. The current study also takes advantage of the unique equatorial orbit of the Van Allen Probes to estimate the average scale size of chorus wave packets, during close separations between the two spacecraft, as a function of radial distance, magnetic latitude, and geomagnetic activity, respectively. Results show that the average scale size of chorus wave packets is approximately 1300-2300 km. The results also show that the inclusion of combined parameters can provide a better representation of the chorus wave distributions in the inner magnetosphere and therefore can further improve our knowledge of the acceleration and loss of radiation belt electrons.

  3. Double density dynamics: realizing a joint distribution of a physical system and a parameter system

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Moritsugu, Kei

    2015-11-01

    To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops in a nonequilibrium environment described by superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example of a 1D harmonic oscillator is provided to validate the method as applied to temperature parameters.

  4. LASER BIOLOGY AND MEDICINE: Light scattering study of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Beuthan, J.; Netz, U.; Minet, O.; Klose, Annerose D.; Hielscher, A. H.; Scheel, A.; Henniger, J.; Müller, G.

    2002-11-01

    The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (the scattering coefficient μs, absorption coefficient μa, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. A new method was then developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results.

  5. Beyond Zipf's Law: The Lavalette Rank Function and Its Properties.

    PubMed

    Fontanelli, Oscar; Miramontes, Pedro; Yang, Yaning; Cocho, Germinal; Li, Wentian

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, the Lavalette rank function, the probability density function can be derived analytically. We also show both computationally and analytically that Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of Lavalette rank function in several datasets. We also address three analysis issues on the statistical testing of Lavalette fitting function, comparison between Zipf's law and lognormal distribution through Lavalette function, and comparison between lognormal distribution and Lavalette distribution.
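
    A quick way to experiment with such fits is least squares in log space, assuming the two-parameter Beta rank form f(r) = A*(N+1-r)^b / r^a, whose a = b special case is the Lavalette function discussed here; the data and starting values below are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      N = 200
      r = np.arange(1, N + 1)

      def log_beta_rank(r, logA, a, b):
          # log of f(r) = A*(N+1-r)^b / r^a
          return logA - a*np.log(r) + b*np.log(N + 1 - r)

      true = (12.0, 0.7, 0.7)                             # Lavalette case: a = b
      noise = np.random.default_rng(8).lognormal(0.0, 0.05, N)
      freq = np.exp(log_beta_rank(r, *true))*noise
      popt, _ = curve_fit(log_beta_rank, r, np.log(freq), p0=(10.0, 1.0, 1.0))
      print("fitted (logA, a, b):", np.round(popt, 3))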

  6. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  7. Raindrop Size Distribution in Different Climatic Regimes from Disdrometer and Dual-Polarized Radar Analysis.

    NASA Astrophysics Data System (ADS)

    Bringi, V. N.; Chandrasekar, V.; Hubbert, J.; Gorgucci, E.; Randeu, W. L.; Schoenhuber, M.

    2003-01-01

    The application of polarimetric radar data to the retrieval of raindrop size distribution parameters and rain rate in samples of convective and stratiform rain types is presented. Data from the Colorado State University (CSU), CHILL, NCAR S-band polarimetric (S-Pol), and NASA Kwajalein radars are analyzed for the statistics and functional relation of these parameters with rain rate. Surface drop size distribution measurements using two different disdrometers (2D video and RD-69) from a number of climatic regimes are analyzed and compared with the radar retrievals in a statistical and functional approach. The composite statistics based on disdrometer and radar retrievals suggest that, on average, the two parameters (generalized intercept and median volume diameter) for stratiform rain distributions lie on a straight line with negative slope, which appears to be consistent with variations in the microphysics of stratiform precipitation (melting of larger, dry snow particles versus smaller, rimed ice particles). In convective rain, `maritime-like' and `continental-like' clusters could be identified in the same two-parameter space that are consistent with the different multiplicative coefficients in the Z = aR^1.5 relations quoted in the literature for maritime and continental regimes.

  8. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    PubMed

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  9. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
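
    The abstract does not give the program's algorithm, but a minimal sketch of the underlying idea (summing Poisson terms in log space to dodge underflow/overflow, plus a bisection search for the parameter that yields a specified cdf value) might look as follows; the function names are ours.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), computed in log space to
    avoid underflow/overflow for large lam or k."""
    if k < 0:
        return 0.0
    # log pmf terms: -lam + i*log(lam) - log(i!)
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)  # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

def poisson_lambda_for_cdf(k, target, lo=1e-9, hi=1e6):
    """Find lam such that P(X <= k) == target; the cdf decreases
    monotonically in lam, so simple bisection suffices."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(poisson_cdf(10, 5.0))             # approx 0.986
print(poisson_lambda_for_cdf(10, 0.9))  # lam giving P(X <= 10) = 0.9
```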

  10. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
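
    A minimal sketch of this kind of Monte Carlo comparison is given below, assuming the Weibull shape parameter is known so that the transformed data y = x^shape are exponential with scale θ and the inverted-gamma prior is the natural conjugate; all numerical values are illustrative, not Canavos's.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mse(n=10, trials=20000, a0=3.0, b0=2.0, theta_true=1.0):
    """Compare Bayes (inverted-gamma prior) and MVU estimators of the
    scale theta of an exponential model (a Weibull with known shape,
    after the transform y = x**shape).  Illustrative only."""
    mse_bayes = mse_mvu = 0.0
    for _ in range(trials):
        y = rng.exponential(theta_true, size=n)
        s = y.sum()
        # With an inverted-gamma(a0, b0) prior the posterior is
        # inverted-gamma(a0 + n, b0 + s); the Bayes estimate under
        # squared error is the posterior mean.
        theta_bayes = (b0 + s) / (a0 + n - 1.0)
        theta_mvu = s / n  # minimum variance unbiased estimate
        mse_bayes += (theta_bayes - theta_true) ** 2
        mse_mvu += (theta_mvu - theta_true) ** 2
    return mse_bayes / trials, mse_mvu / trials

# Bayes MSE is typically smaller when the prior mean b0/(a0-1) is near theta_true
print(simulate_mse())
```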

  11. Electrostatic analyzer measurements of ionospheric thermal ion populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, P. A.; Lynch, K. A.

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about the measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  12. Kappa-Electrons Downstream of the Solar Wind Termination Shock

    NASA Astrophysics Data System (ADS)

    Fahr, H. J.

    2017-12-01

    A theoretical description of the solar wind electron distribution function downstream of the termination shock under the influence of the shock-induced injection of overshooting keV-energetic electrons will be presented. A kinetic phase-space transport equation in the bulk frame of the heliosheath plasma flow is developed for the solar wind electrons, taking into account shock-induced electron injection, convective changes, magnetic cooling processes and whistler wave-induced energy diffusion. Assuming that the local electron distribution under the prevailing Non-LTE conditions can be represented by a local kappa function with a local kappa parameter that varies with the streamline coordinates, we determine the parameters of the resulting, initial kappa distribution for the downstream electrons. From this initial function spectral electron fluxes can be derived and can be compared with those measured by the VOYAGER-1 spacecraft in the range from 40 to 70 keV. It can then be shown that with kappa values around kappa = 6 one can in fact fit these data very satisfactorily. In addition it is shown that for isentropic electron flows kappa-distributed electrons have to undergo simultaneous changes of both parameters, i.e. kappa and theta, of the electron kappa function. It is then also shown that under the influence of energy sinks and sources the electron flux becomes non-isentropic with electron entropies changing along the streamline.
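
    For readers who want to experiment, the snippet below evaluates the standard isotropic (Vasyliunas-type) kappa velocity distribution; whether this exact normalization matches the paper's convention is an assumption on our part.

```python
import numpy as np
from scipy.special import gammaln

def kappa_dist(v, n, theta, kappa):
    """Isotropic kappa velocity distribution (Vasyliunas form); tends
    to a Maxwellian as kappa -> infinity.  The gamma-function ratio is
    evaluated in log space to stay finite for large kappa."""
    log_ratio = gammaln(kappa + 1.0) - gammaln(kappa - 0.5)
    norm = n / (np.pi ** 1.5 * theta ** 3) * np.exp(log_ratio) / kappa ** 1.5
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

v = np.linspace(0, 10, 200)                          # speeds in units of theta
f6 = kappa_dist(v, n=1.0, theta=1.0, kappa=6.0)      # kappa ~ 6, as in the paper
fmx = kappa_dist(v, n=1.0, theta=1.0, kappa=200.0)   # near-Maxwellian reference
# The kappa = 6 curve has a strongly enhanced suprathermal tail:
print(f6[-1] / fmx[-1])   # ratio >> 1 at v = 10 * theta
```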

  13. Cometary water-group ions in the region surrounding Comet Giacobini-Zinner - Distribution functions and bulk parameter estimates

    NASA Astrophysics Data System (ADS)

    Staines, K.; Balogh, A.; Cowley, S. W. H.; Hynds, R. J.; Yates, T. S.; Richardson, I. G.; Sanderson, T. R.; Wenzel, K. P.; McComas, D. J.; Tsurutani, B. T.

    1991-03-01

    The bulk parameters (number density and thermal energy density) of cometary water-group ions in the region surrounding Comet Giacobini-Zinner have been derived using data from the EPAS instrument on the ICE spacecraft. The derivation is based on the assumption that the pick-up ion distribution function is isotropic in the frame of the bulk flow, an approximation which has previously been shown to be reasonable within about 400,000 km of the comet nucleus along the spacecraft trajectory. The transition between the pick-up and mass-loaded regions occurs at the cometary shock, which was traversed at a cometocentric distance of about 100,000 km along the spacecraft track. Examination of the ion distribution functions in this region, transformed to the bulk flow frame, indicates the occurrence of a flattened distribution in the vicinity of the local pick-up speed, and a steeply falling tail at speeds above, which may be approximated as an exponential in ion speed.

  14. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
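
    A small sketch of the construction described above: a bivariate extreme-value CDF assembled from two lower-bounded (here Frechet) marginals and a Pickands 'dependence' function. The logistic form of the dependence function is one standard choice and is not necessarily the one used in this work.

```python
import numpy as np

def frechet_cdf(x, alpha, scale):
    """Frechet marginal, lower-bounded at zero: F(x) = exp(-(x/scale)^(-alpha))."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-(x / scale) ** (-alpha)), 0.0)

def bivariate_ev_cdf(x, y, alpha1, s1, alpha2, s2, r=2.0):
    """Bivariate extreme-value CDF built from two marginals and a
    logistic Pickands dependence function A(t) = ((1-t)^r + t^r)^(1/r);
    r = 1 gives independence, r -> infinity complete dependence."""
    lx = np.log(frechet_cdf(x, alpha1, s1))
    ly = np.log(frechet_cdf(y, alpha2, s2))
    t = ly / (lx + ly)
    A = ((1 - t) ** r + t ** r) ** (1.0 / r)
    return np.exp((lx + ly) * A)

print(bivariate_ev_cdf(2.0, 3.0, alpha1=2.0, s1=1.0, alpha2=1.5, s2=1.0))
```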

  15. Temperature evolution of the local order parameter in relaxor ferroelectrics (1 - x)PMN-xPZT

    NASA Astrophysics Data System (ADS)

    Gridnev, S. A.; Glazunov, A. A.; Tsotsorin, A. N.

    2005-09-01

    The temperature dependence of the local order parameter and the relaxation time distribution function have been determined in (1 - x)PMN-xPZT ceramic samples via dielectric permittivity. Above the Burns temperature, the permittivity was found to follow the Curie-Weiss law, and the deviation from it was observed to increase with decreasing temperature. A local order parameter was calculated from the dielectric data using a modified Landau-Devonshire approach. These results are compared to the distribution function of relaxation times. It was found that a glasslike freezing of reorientable polar clusters occurs in the temperature range of the diffuse relaxor transition. The evolution of the studied system to a more ordered state arises from the increased PZT content.

  16. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
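
    The two-dimensional Gabor function itself is standard; a minimal sketch for generating one (a Gaussian envelope times a cosine carrier, with size, spatial frequency, orientation, and aspect-ratio parameters mirroring those discussed above) is given below.

```python
import numpy as np

def gabor_2d(size, sigma, wavelength, theta, aspect=1.0, phase=0.0):
    """Two-dimensional Gabor function: a Gaussian envelope times a
    sinusoidal carrier.  sigma sets the size, wavelength the spatial
    frequency, theta the orientation, aspect the envelope aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (aspect * yr) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

g = gabor_2d(size=32, sigma=5.0, wavelength=10.0, theta=np.pi / 4)
print(g.shape)   # (33, 33)
```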

  17. Light scattering study of rheumatoid arthritis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beuthan, J; Netz, U; Minet, O

    The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (scattering coefficient μs, absorption coefficient μa, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. Then, a new method was developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results. (laser biology and medicine)

  18. Design of Magnetic Charged Particle Lens Using Analytical Potential Formula

    NASA Astrophysics Data System (ADS)

    Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.

    2018-05-01

    The aim of the current research was to exploit the potential of two cylindrical electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, Simulink models were built to determine the distribution of the target function and the related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value Bmax and the half width W of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely the bore radius R.

  19. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.

  20. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling are driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it makes it possible to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like the MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we perform an uncertainty analysis of a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
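
    The SPOT interface itself is not reproduced here; the sketch below only illustrates the simplest of the listed algorithms (plain Monte Carlo sampling from uniform priors, scored by RMSE) on a toy model, with all names and values our own rather than SPOT's actual API.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(params, x):
    """Toy 'environmental model': a Rosenbrock-style response, echoing
    the paper's first case study."""
    a, b = params
    return (a - x) ** 2 + b * (x ** 2 - a) ** 2

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Uniform priors for the two parameters, as one would register them in
# a toolbox such as SPOT
bounds = np.array([[-2.0, 2.0], [0.0, 3.0]])
x = np.linspace(-1, 1, 50)
obs = model((1.0, 1.0), x)               # synthetic observations

# Plain Monte Carlo sampling: draw from the priors, keep the best set
samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 2))
errors = np.array([rmse(model(p, x), obs) for p in samples])
best = samples[np.argmin(errors)]
print("best parameters:", best, "RMSE:", errors.min())
```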

  1. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative distribution to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit among the IGG, log-normal (LN), and GG distributions with the experimental probability density functions in moderate-to-strong turbulence is compared, and results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. As the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
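
    Since the IGG model is a modulation (product) of small- and large-scale fluctuations, samples are easy to generate: draw the large-scale factor from an inverse Gaussian and the small-scale factor from a unit-mean gamma. The parameter values below are hypothetical placeholders, not fitted atmospheric values.

```python
import numpy as np

rng = np.random.default_rng(0)

def igg_samples(n, mu_ig=1.0, lam_ig=4.0, k_gamma=4.0):
    """Irradiance samples under the IGG modulation model: large-scale
    fluctuations X ~ inverse Gaussian, small-scale fluctuations
    Y ~ gamma with unit mean, irradiance I = X * Y."""
    x = rng.wald(mu_ig, lam_ig, size=n)                        # inverse Gaussian
    y = rng.gamma(shape=k_gamma, scale=1.0 / k_gamma, size=n)  # unit-mean gamma
    return x * y

I = igg_samples(100_000)
print(I.mean(), I.var())   # scintillation index ~ var / mean^2
```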

  2. mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris

    2018-02-01

    mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.

  3. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  4. Freezing lines of colloidal Yukawa spheres. II. Local structure and characteristic lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gapinski, Jacek; Patkowski, Adam

    Using the Rogers-Young (RY) integral equation scheme for the static pair correlation functions combined with the liquid-phase Hansen-Verlet freezing rule, we study the generic behavior of the radial distribution function and static structure factor of monodisperse charge-stabilized suspensions with Yukawa-type repulsive particle interactions at freezing. In a related article, labeled Paper I [J. Gapinski, G. Nägele, and A. Patkowski, J. Chem. Phys. 136, 024507 (2012)], this hybrid method was used to determine two-parameter freezing lines for experimentally controllable parameters, characteristic of suspensions of charged silica spheres in dimethylformamide. A universal scaling of the RY radial distribution function maximum is shown to apply to the liquid-bcc and liquid-fcc segments of the universal freezing line. A thorough analysis is made of the behavior of characteristic distances and wavenumbers, next-neighbor particle coordination numbers, osmotic compressibility factor, and the Ravaché-Mountain-Streett minimum-maximum radial distribution function ratio.

  5. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
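
    As an example of the kind of parametric fitting described, the sketch below fits a lognormal number size distribution (one of the commonly used analytic models) to a hypothetical measured spectrum; the catalog/visual-matching procedure of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_number_dist(d, n_total, d_g, sigma_g):
    """Lognormal aerosol size distribution dN/d(log d): d_g is the
    geometric mean diameter, sigma_g (> 1) the geometric standard
    deviation."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(d / d_g) ** 2 / (2 * np.log(sigma_g) ** 2)))

# Hypothetical measured size spectrum (diameters in micrometres)
d = np.logspace(-2, 1, 30)
measured = lognormal_number_dist(d, 1000.0, 0.2, 1.8) \
    * np.random.uniform(0.9, 1.1, d.size)

popt, _ = curve_fit(lognormal_number_dist, d, measured, p0=(800.0, 0.1, 2.0))
print("N = %.0f, d_g = %.3f um, sigma_g = %.2f" % tuple(popt))
```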

  6. Made-to-measure modelling of observed galaxy dynamics

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.

    2018-01-01

    Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.

  7. Dust particle radial confinement in a dc glow discharge.

    PubMed

    Sukhinin, G I; Fedoseev, A V; Antipov, S N; Petrov, O F; Fortov, V E

    2013-01-01

    A self-consistent nonlocal model of the positive column of a dc glow discharge with dust particles is presented. Radial distributions of plasma parameters and the dust component in an axially homogeneous glow discharge are considered. The model is based on the solution of a nonlocal Boltzmann equation for the electron energy distribution function, drift-diffusion equations for ions, and the Poisson equation for a self-consistent electric field. The radial distribution of dust particle density in a dust cloud was fixed as a given steplike function or was chosen according to an equilibrium Boltzmann distribution. The balance of electron and ion production in argon ionization by an electron impact and their losses on the dust particle surface and on the discharge tube walls is taken into account. The interrelation of discharge plasma and the dust cloud is studied in a self-consistent way, and the radial distributions of the discharge plasma and dust particle parameters are obtained. It is shown that the influence of the dust cloud on the discharge plasma has a nonlocal behavior, e.g., density and charge distributions in the dust cloud substantially depend on the plasma parameters outside the dust cloud. As a result of a self-consistent evolution of plasma parameters to equilibrium steady-state conditions, ionization and recombination rates become equal to each other, electron and ion radial fluxes become equal to zero, and the radial component of electric field is expelled from the dust cloud.

  8. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-01

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ (∝ |Vzz|) and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
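
    For intuition, the simple (isotropic) Czjzek model is easy to sample: draw the five independent components of a random symmetric traceless EFG tensor from a Gaussian and diagonalize. The sketch below does exactly that; the extended model (ECM), which superimposes a fixed local contribution, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(7)

def czjzek_sample(n, sigma=1.0):
    """Sample (Vzz, eta) pairs from the simple Czjzek model: the five
    independent components of a random symmetric traceless EFG tensor
    are Gaussian with width sigma, and the tensor is diagonalized with
    the convention |Vxx| <= |Vyy| <= |Vzz|, eta = (Vxx - Vyy)/Vzz."""
    vzz = np.empty(n)
    eta = np.empty(n)
    for i in range(n):
        u = rng.normal(0.0, sigma, 5)
        # symmetric traceless tensor from its 5 independent components
        t = np.array([[u[0], u[1], u[2]],
                      [u[1], u[3], u[4]],
                      [u[2], u[4], -u[0] - u[3]]])
        w = np.linalg.eigvalsh(t)
        w = w[np.argsort(np.abs(w))]
        vzz[i] = w[2]
        eta[i] = (w[0] - w[1]) / w[2]
    return vzz, eta

vzz, eta = czjzek_sample(10_000)
print(np.mean(np.abs(vzz)), np.mean(eta))
```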

  9. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass.

    PubMed

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-26

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ (∝ |Vzz|) and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.

  10. Probability density functions for use when calculating standardised drought indices

    NASA Astrophysics Data System (ADS)

    Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie

    2015-04-01

    Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The Tweedie is a three-parameter distribution which includes the Gamma distribution as a special case. It is bounded below at zero and has enough flexibility to fit most behaviours observed in the data. It does not always outperform the three-parameter distributions, but the rejection rates are similar. In addition, for certain parameter values the Tweedie distribution has a positive mass at zero, which means that ephemeral streams and months with zero rainfall can be modelled. It holds potential for wider application in drought studies in other climates and types of catchment.
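
    A minimal sketch of the SPI construction described above, using the classical gamma fit (lower-bounded at zero) and a Shapiro-Wilk check of the transformed values; the Tweedie and three-parameter fits compared in the study are not reproduced, and the synthetic record is hypothetical.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardised precipitation index: fit a gamma distribution to
    the accumulated precipitation, then map the fitted quantiles onto
    a standard normal (mean = 0, standard deviation = 1)."""
    a, loc, scale = stats.gamma.fit(precip, floc=0.0)  # lower bound at zero
    cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

monthly = np.random.default_rng(3).gamma(2.0, 40.0, size=360)  # synthetic record
z = spi(monthly)
print(stats.shapiro(z))   # Shapiro-Wilk test of the transformed data, as in the study
```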

  11. Information sensitivity functions to assess parameter information gain and identifiability of dynamical systems.

    PubMed

    Pant, Sanjay

    2018-05-01

    A new class of functions, called the 'information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system, is presented. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters, these information gains are also presented as marginal posterior variances, and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy-to-interpret ISFs can be used to (a) identify time intervals or regions in dynamical system behaviour where information about the parameters is concentrated; (b) assess the effect of measurement noise on the information gain for the parameters; (c) assess whether sufficient information in an experimental protocol (input, measurements and their frequency) is available to identify the parameters; (d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and (e) assess identifiability problems for particular sets of parameters.

  12. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.

  13. [Data distribution and transformation in population based sampling survey of viral load in HIV positive men who have sex with men in China].

    PubMed

    Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To understand the distribution of population viral load (PVL) data in HIV infected men who have sex with men (MSM), fit a distribution function and explore the appropriate estimating parameter of PVL. Methods: The detection limit of viral load (VL) was ≤ 50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data, then the stable distribution function was fitted with a test of goodness of fit. Results: The original PVL data fitted a skewed distribution with a variation coefficient of 622.24%, and had a multimodal distribution after Box-Cox transformation with optimal parameter (λ) of -0.11. The distribution of PVL data over the detection limit was skewed and heavy tailed when transformed by Box-Cox with optimal λ = 0. By fitting the distribution function of the transformed data over the detection limit, it matched the stable distribution (SD) function (α = 1.70, β = -1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data had some censored data below the detection limit, and the data over the detection limit had an abnormal distribution with a large degree of variation. When the proportion of censored data was large, it was inappropriate to use the half-value of the detection limit to replace the censored ones. The log-transformed data over the detection limit fitted the SD. The median (M) and inter-quartile range (IQR) of the log-transformed data can be used to describe the centralized tendency and dispersion tendency of the data over the detection limit.
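
    A small sketch of the transformation step on hypothetical data with a 50 copies/ml detection limit is shown below; scipy's Box-Cox routine estimates the optimal λ, and the recommended M and IQR summaries are computed on the transformed values. Fitting the stable distribution itself (e.g. with scipy.stats.levy_stable) is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical viral-load measurements (copies/ml) with a detection limit
vl = rng.lognormal(mean=8.0, sigma=2.0, size=2000)
detected = vl[vl > 50]                  # keep only values above 50 copies/ml

# Box-Cox transformation of the data over the detection limit;
# lmbda=None lets scipy estimate the optimal lambda (the paper found
# an optimal lambda of 0, i.e. a log transform, for these data)
transformed, lam = stats.boxcox(detected)
print("optimal lambda: %.2f" % lam)

# Summarize with median and inter-quartile range, as recommended when
# the distribution is skewed and heavy tailed
q1, med, q3 = np.percentile(transformed, [25, 50, 75])
print("median = %.2f, IQR = %.2f" % (med, q3 - q1))
```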

  14. Effects of molecular and particle scatterings on the model parameter for remote-sensing reflectance.

    PubMed

    Lee, ZhongPing; Carder, Kendall L; Du, KePing

    2004-09-01

    For optically deep waters, remote-sensing reflectance (r(rs)) is traditionally expressed as the ratio of the backscattering coefficient (b(b)) to the sum of absorption and backscattering coefficients (a + b(b)), multiplied by a model parameter (g, or the so-called f'/Q). Parameter g is further expressed as a function of b(b)/(a + b(b)) (or b(b)/a) to account for its variation that is due to multiple scattering. With such an approach, the same g value will be derived for different a and b(b) values that provide the same ratio. Because g is partially a measure of the angular distribution of upwelling light, and the angular distribution from molecular scattering is quite different from that of particle scattering, g values are expected to vary with different scattering distributions even if the b(b)/a ratios are the same. In this study, after numerically demonstrating the effects of molecular and particle scatterings on the values of g, an innovative r(rs) model is developed. This new model expresses r(rs) in two separate terms: one governed by the phase function of molecular scattering and one governed by the phase function of particle scattering, with a model parameter introduced for each term. In this way the phase function effects from molecular and particle scatterings are explicitly separated and accounted for. This new model provides an analytical tool to understand and quantify the phase-function effects on r(rs), and a platform to calculate the r(rs) spectrum quickly and accurately, as required for remote-sensing applications.
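
    To make the modeling chain concrete, the sketch below contrasts the traditional one-term form (with the widely quoted Gordon et al. 1988 coefficients) against a two-term molecular/particle split in the spirit of this paper; the two-term coefficient values are placeholders for illustration, not the authors' fitted values.

```python
import numpy as np

def rrs_traditional(a, bb, g0=0.0949, g1=0.0794):
    """Traditional one-term model: rrs = g * bb/(a + bb), with g itself
    a function of bb/(a + bb) (coefficients from Gordon et al. 1988)."""
    u = bb / (a + bb)
    return (g0 + g1 * u) * u

def rrs_two_term(a, bbw, bbp, gw=0.113, gp0=0.197, gp1=0.636, gp2=2.552):
    """Two-term form in the spirit of Lee et al.: molecular (water) and
    particle backscattering each get their own model parameter.  All
    coefficient values here are illustrative placeholders."""
    k = a + bbw + bbp
    gp = gp0 * (1.0 - gp1 * np.exp(-gp2 * bbp / k))
    return gw * bbw / k + gp * bbp / k

print(rrs_traditional(a=0.5, bb=0.01))
print(rrs_two_term(a=0.5, bbw=0.002, bbp=0.008))
```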

  15. Isothermal enthalpy relaxation of glassy 1,2,6-hexanetriol

    NASA Astrophysics Data System (ADS)

    Fransson, Å.; Bäckström, G.

    The isothermal enthalpy relaxation of glassy 1,2,6-hexanetriol has been measured at six temperatures. The relaxation time and the distribution parameters extracted from fits of the Williams-Watts relaxation function are compared with parameters obtained by other techniques and on other substances. A detailed comparison of the Williams-Watts and the Davidson-Cole relaxation functions is presented.
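
    The Williams-Watts (stretched-exponential) relaxation function is standard, so a minimal fitting sketch on hypothetical relaxation data is given below; the actual enthalpy data and fitted values of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def williams_watts(t, tau, beta):
    """Williams-Watts (stretched-exponential) relaxation function
    phi(t) = exp(-(t/tau)**beta), 0 < beta <= 1; beta reflects the
    width of the underlying distribution of relaxation times."""
    return np.exp(-(t / tau) ** beta)

# Hypothetical isothermal enthalpy-relaxation data
t = np.logspace(-1, 3, 40)                            # time in seconds
phi = williams_watts(t, 50.0, 0.6) + np.random.normal(0, 0.01, t.size)

(tau, beta), _ = curve_fit(williams_watts, t, phi, p0=(10.0, 0.5))
print("tau = %.1f s, beta = %.2f" % (tau, beta))
```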

  16. Relaxation distribution function of intracellular dielectric zones as an indicator of tumorous transition of living cells.

    PubMed

    Thornton, B S; Hung, W T; Irving, J

    1991-01-01

    The response decay data of living cells subjected to electric polarization are associated with their relaxation distribution function (RDF) and can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.

  17. A fortran program for Monte Carlo simulation of oil-field discovery sequences

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.

    1993-01-01

    We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios.
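
    A compact sketch of such a simulation is given below; for brevity it draws the parent population from a lognormal rather than the three-parameter log gamma used by the program, and uses a one-parameter size-biased discovery model.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_discoveries(n_fields=200, n_drilled=50, beta=1.0):
    """Sample a synthetic parent population of field sizes, then draw a
    discovery sequence without replacement with probability proportional
    to size**beta (a common discovery-process model)."""
    sizes = rng.lognormal(mean=2.0, sigma=1.5, size=n_fields)
    remaining = list(range(n_fields))
    sequence = []
    for _ in range(n_drilled):
        w = sizes[remaining] ** beta
        pick = rng.choice(len(remaining), p=w / w.sum())
        sequence.append(sizes[remaining.pop(pick)])
    return np.array(sequence)

seq = simulate_discoveries()
print(seq[:5], "...")   # larger fields tend to be discovered early
```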

  18. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
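
    The error-propagation application described above reduces to a few lines: sample the measurement vector, push the samples through the nonlinear function, and read off the expectation and covariance. The function and numbers below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Measurements: a random vector with known mean and covariance
mu = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

def transform(x):
    """A nonlinear function of the measurements whose distribution is
    not available in closed form."""
    return np.array([x[..., 0] * x[..., 1],
                     np.sin(x[..., 0]) + x[..., 1] ** 2])

samples = rng.multivariate_normal(mu, cov, size=100_000)
y = transform(samples).T                 # shape (100000, 2)

# Monte Carlo estimates of the expectation and covariance of the
# transformed vector -- no linearization (and hence no derivatives) needed
print("E[y]   =", y.mean(axis=0))
print("Cov[y] =", np.cov(y, rowvar=False))
```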

  19. The Impact of Uncertain Physical Parameters on HVAC Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai

    HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step to the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or Uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate sensitivity of DR output to the uncertainty in the distribution parameters. The significance ranking on the uncertainty sources is given for future guidance in the modeling of HVAC demand response.

  20. Electrostatic analyzer measurements of ionospheric thermal ion populations

    DOE PAGES

    Fernandes, P. A.; Lynch, K. A.

    2016-07-09

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about the measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  1. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, the new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, the ramping time, and the two-temperature parameter on the distributions of temperature, displacement, and stress.
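
    The Fourier-expansion inversion mentioned above is a standard numerical scheme (Durbin/Crump type); a minimal sketch, checked against a transform with a known inverse, is given below. The heuristic choice gamma*T ≈ 5 and the truncation length are tuning assumptions, and no convergence acceleration is applied.

```python
import numpy as np

def invert_laplace_fourier(F, t, n_terms=4000):
    """Numerical inverse Laplace transform via a Fourier-series
    (Durbin/Crump-type) expansion:
      f(t) ~ (e^(g*t)/T) * [F(g)/2 + sum_k Re(F(g + i*k*pi/T) * e^(i*k*pi*t/T))]
    valid for 0 < t < 2T.  gamma*T ~ 5 is a common heuristic; without
    convergence acceleration, n_terms should be kept large."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    T = 2.0 * t.max()
    g = 5.0 / T
    k = np.arange(1, n_terms + 1)
    Fk = np.array([F(g + 1j * kk * np.pi / T) for kk in k])
    phases = np.exp(1j * np.pi * np.outer(t, k) / T)
    series = F(g) / 2.0 + (phases * Fk[None, :]).real.sum(axis=1)
    return np.exp(g * t) / T * series

# Sanity check on a transform with a known inverse: L^-1{1/(s+1)} = e^(-t)
t = np.array([0.5, 1.0, 2.0])
print(invert_laplace_fourier(lambda s: 1.0 / (s + 1.0), t))
print(np.exp(-t))   # the two outputs should agree closely
```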

  2. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  3. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
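
    A bare-bones version of the compounding analysis: generate a locally Gaussian series whose variance drifts on long time scales, decompose it into windows, estimate the local variances, and observe that the long-horizon distribution is heavy-tailed. All distributional choices below are illustrative, not the fan-turbulence or exchange-rate results of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic nonstationary series: locally Gaussian, but with a variance
# that drifts on long time scales (here chi-squared distributed)
n_windows, window = 400, 250
local_var = rng.chisquare(df=4, size=n_windows) / 4.0
series = np.concatenate([rng.normal(0.0, np.sqrt(v), window) for v in local_var])

# Compounding analysis: decompose the signal into windows and estimate
# the distribution of the local variances
est_var = series.reshape(n_windows, window).var(axis=1)
print("mean/var of local variances:", est_var.mean(), est_var.var())

# The long-horizon (compounded) distribution is heavy-tailed relative
# to a Gaussian with the same overall variance:
excess_kurtosis = np.mean(series ** 4) / np.mean(series ** 2) ** 2 - 3.0
print("excess kurtosis of full series:", excess_kurtosis)
```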

  4. Optimal startup control of a jacketed tubular reactor.

    NASA Technical Reports Server (NTRS)

    Hahn, D. R.; Fan, L. T.; Hwang, C. L.

    1971-01-01

    The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.

  5. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988)].

  6. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
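
    The Kullback-Leibler bookkeeping used to rationalize the approach to equilibrium is easy to reproduce on a grid; the sketch below evaluates D(f||g) between a kappa speed distribution and a Maxwellian, with normalizations that are our assumption rather than the paper's.

```python
import numpy as np
from scipy.special import gammaln

v = np.linspace(0.01, 10.0, 2000)          # speed grid (units of theta)
dv = v[1] - v[0]

def maxwellian_speed(v, theta=1.0):
    """Maxwellian speed distribution, normalized to unity."""
    return 4.0 * np.pi * v ** 2 * np.exp(-(v / theta) ** 2) / (np.pi ** 1.5 * theta ** 3)

def kappa_speed(v, theta=1.0, kappa=6.0):
    """Kappa speed distribution (normalized for kappa > 3/2)."""
    c = np.exp(gammaln(kappa + 1.0) - gammaln(kappa - 0.5)) \
        / (np.pi ** 1.5 * kappa ** 1.5 * theta ** 3)
    return 4.0 * np.pi * v ** 2 * c * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

f, g = kappa_speed(v), maxwellian_speed(v)
# Kullback-Leibler relative entropy D(f||g) = int f ln(f/g) dv >= 0;
# it shrinks to zero as an evolving distribution reaches its steady state.
print(np.sum(f * np.log(f / g)) * dv)
```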

  7. Weighted recalibration of the Rosetta pedotransfer model with improved estimates of hydraulic parameter distributions and summary statistics (Rosetta3)

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggen; Schaap, Marcel G.

    2017-04-01

    Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in lieu of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001, denoted as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unify the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models, compared to 60 or 100 in Rosetta1, thus allowing the uni-variate and bi-variate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly while the root mean square error (RMSE) for water content was reduced modestly; the RMSE for Ks was increased by 0.9% (H3w) to 3.3% (H5w) in the new models on the log scale of Ks compared with the Rosetta1 model. It was found that estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like "shift" parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it may be necessary to parameterize the distributions in different ways if the new estimates are used in stochastic analyses of vadose zone flow and transport. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code as well as additional documentation is available at: http://www.cals.arizona.edu/research/rosettav3.html.

  8. Determination of material distribution in heading process of small bimetallic bar

    NASA Astrophysics Data System (ADS)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. To reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. A wide range of commercially available bimetallic (silver-copper) rivets for the production of contacts is on the market. This creates new conditions in the riveting process, because the object being riveted is bimetallic. In the analyzed example it is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interactions of Plastic Deformations, and a method for its determination are proposed. The parameter is determined from two-parameter stress-strain curves and is a function of these parameters and the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the selection of a pair of materials that achieves the desired distribution.

  9. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence of low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.

  10. Optical perturbation of atoms in weak localization

    NASA Astrophysics Data System (ADS)

    Yedjour, A.

    2018-01-01

    We determine the microscopic transport parameters that are necessary to describe the diffusion process of an atomic gas in an optical speckle potential. We use the self-consistent theory to calculate the self-energy of the atomic gas. We compute the spectral function numerically by averaging over disorder realizations in terms of the Green's function. We focus mainly on the behaviour of the energy distribution of the atoms to estimate a correction to the mobility edge. Our results show that the energy distribution of the atoms locates the mobility edge position under the disorder amplitude, and that this behaviour changes with each disorder parameter. We conclude that the disorder potential amplitude induces a modification of the energy distribution of the atoms that plays a major role in the prediction of the mobility edge.

  11. AB INITIO Molecular Dynamics Simulations on Local Structure and Electronic Properties in Liquid MgxBi1-x Alloys

    NASA Astrophysics Data System (ADS)

    Hao, Qing-Hai; You, Yu-Wei; Kong, Xiang-Shan; Liu, C. S.

    2013-03-01

    The microscopic structure and dynamics of liquid MgxBi1-x (x = 0.5, 0.6, 0.7) alloys together with pure liquid Mg and Bi metals were investigated by means of ab initio molecular dynamics simulations. We present results for structural properties, including the pair correlation function, structure factor, bond-angle distribution function and bond order parameter, and their composition dependence. The dynamical and electronic properties have also been studied. The structure factor and pair correlation function are in agreement with the available experimental data. The calculated bond-angle distribution function and bond order parameter suggest that the stoichiometric composition Mg3Bi2 exhibits a different local structural order compared with other concentrations, which helps us understand the appearance of the minimum electronic conductivity at this composition observed in previous experiments.

  12. Distributed adaptive asymptotically consensus tracking control of uncertain Euler-Lagrange systems under directed graph condition.

    PubMed

    Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin

    2017-11-01

    In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Sulfate passivation in the lead-acid system as a capacity limiting process

    NASA Astrophysics Data System (ADS)

    Kappus, W.; Winsel, A.

    1982-10-01

    Calculations of the discharge capacity of Pb and PbO2 electrodes as a function of various parameters are presented. They are based on the solution-precipitation mechanism for the discharge reaction and its formulation by Winsel et al. A logarithmic pore size distribution is used to fit experimental porosigrams of Pb and PbO2 electrodes. Based on this pore size distribution the capacity is calculated as a function of current, BET surface, and porosity of the PbSO4 diaphragm. The PbSO4 supersaturation as the driving force of the diffusive transport is chosen as a free parameter.

  14. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.

  15. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
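
    A toy version of the selection criterion: with Gaussian predictive densities, the Kullback-Leibler information between two rival linear models is available in closed form, and the next design point can be chosen where it is largest. The models, noise variance and design grid below are invented for illustration.

```python
import numpy as np

def kl_normal(mu0, var0, mu1, var1):
    # Closed-form KL divergence D(N0 || N1) between univariate normal densities.
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def predict(theta, x, noise_var=0.5):
    # Predictive mean and variance of a linear model y = a + b*x.
    a, b = theta
    return a + b * x, noise_var

model0, model1 = (0.0, 1.0), (1.0, 0.7)       # two rival parameterizations
candidate_x = np.linspace(0.0, 10.0, 21)      # admissible design points

# Choose the experiment where the models' predictions are most discriminable.
scores = [kl_normal(*predict(model0, x), *predict(model1, x)) for x in candidate_x]
print(f"most informative next experiment: x = {candidate_x[int(np.argmax(scores))]}")
```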

  16. Distinguishing Functional DNA Words; A Method for Measuring Clustering Levels

    NASA Astrophysics Data System (ADS)

    Moghaddasi, Hanieh; Khalifeh, Khosrow; Darooneh, Amir Hossein

    2017-01-01

    Functional DNA sub-sequences and genome elements are spatially clustered throughout the genome, just as keywords are in literary texts. Therefore, some of the methods for ranking words in texts can also be used to compare different DNA sub-sequences. In analogy with literary texts, here we claim that the distribution of distances between successive sub-sequences (words) is q-exponential, which is the distribution function in non-extensive statistical mechanics. Thus the q-parameter can be used as a measure of word clustering levels. Here, we analyzed the distribution of distances between consecutive occurrences of the 16 possible dinucleotides in human chromosomes to obtain their corresponding q-parameters. We found that CG, a biologically important two-letter word because of its methylation, has the highest clustering level. This finding shows the predictive ability of the method in biology. We also proposed that chromosome 18, with the largest value of the q-parameter for promoters of genes, is more sensitive to dietary and lifestyle factors. We extended our study to compare the genomes of some selected organisms and concluded that the clustering level of CGs increases in higher evolutionary organisms compared to lower ones.
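
    A sketch of the fitting step, assuming the usual Tsallis q-exponential density p(d) = (2-q)·λ·[1 + (q-1)λd]^(1/(1-q)), which reduces to an ordinary exponential as q → 1; the synthetic distances merely stand in for the genomic inter-word data.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(d, q, lam):
    # Tsallis q-exponential distribution of inter-word distances (1 < q < 2).
    return (2.0 - q) * lam * np.power(1.0 + (q - 1.0) * lam * d, 1.0 / (1.0 - q))

# Hypothetical distances between consecutive occurrences of a dinucleotide.
distances = np.random.default_rng(0).pareto(3.0, 20000) * 50.0
counts, edges = np.histogram(distances, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

(q_hat, lam_hat), _ = curve_fit(q_exponential, centers, counts,
                                p0=(1.2, 0.05), bounds=([1.0001, 1e-4], [1.99, 1.0]))
print(f"clustering level q = {q_hat:.3f}")
```

    A larger fitted q indicates stronger clustering of the word along the sequence.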

  17. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters (referred to as x) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations existing in the traditional adjustment procedure based on chi-square minimization and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, covering the thermal, resonance and continuum regions, for all nuclear reaction models at these energies. Algorithms are presented based on Monte Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
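
    A minimal sketch of the BMC idea, pdf(posterior) ∝ pdf(prior) × likelihood, sampled with a Metropolis Markov chain. The one-parameter Gaussian model and all numbers are placeholders, not a nuclear reaction model.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.3, size=50)     # stand-in for measured observables

def log_prior(x):
    # Gaussian prior on the single model parameter.
    return -0.5 * ((x - 1.5) / 1.0) ** 2

def log_likelihood(x):
    # Probability density of observing the data set knowing x.
    return -0.5 * np.sum(((data - x) / 0.3) ** 2)

chain, x = [], 1.5
for _ in range(20000):
    prop = x + rng.normal(0.0, 0.1)      # random-walk proposal
    log_ratio = (log_prior(prop) + log_likelihood(prop)
                 - log_prior(x) - log_likelihood(x))
    if np.log(rng.random()) < log_ratio:
        x = prop
    chain.append(x)
print(f"posterior mean = {np.mean(chain[2000:]):.3f}")   # discard burn-in
```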

  18. Uncertainty Analysis of Simulated Hydraulic Fracturing

    NASA Astrophysics Data System (ADS)

    Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.

    2012-12-01

    Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of the hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first defined the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. The 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length, and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed, and a global sensitivity analysis was performed to identify the parameters most influential for the objective functions representing fracture connectivity, which are critical for the sweep efficiency of the recovery process. The second-stage high-resolution response surfaces were constructed with the dimension reduced to the number of the most sensitive parameters. An additional response surface, with respect to an objective function based on the fractal dimension of the fracture distributions, was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
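
    The sampling stage can be sketched with SciPy's quasi-Monte Carlo module; the bounds below are placeholders, not the study's actual parameter ranges.

```python
import numpy as np
from scipy.stats import qmc

# 1,000 Latin-hypercube samples in an 11-dimensional parameter space.
sampler = qmc.LatinHypercube(d=11, seed=7)
unit_samples = sampler.random(n=1000)

lower = np.full(11, 0.1)     # hypothetical lower bounds, one per input parameter
upper = np.full(11, 10.0)    # hypothetical upper bounds
parameter_sets = qmc.scale(unit_samples, lower, upper)
print(parameter_sets.shape)  # (1000, 11): one row per fracture-simulator run
```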

  19. Determining the parameters of Weibull function to estimate the wind power potential in conditions of limited source meteorological data

    NASA Astrophysics Data System (ADS)

    Fetisova, Yu. A.; Ermolenko, B. V.; Ermolenko, G. V.; Kiseleva, S. V.

    2017-04-01

    We studied the information basis for the assessment of wind power potential on the territory of Russia. We describe a methodology to determine the parameters of the Weibull function, which describes the probability density of wind speeds at a defined basic height above the earth's surface, using the available data on the average speed at this height and its frequency of occurrence by speed gradations. The application of the least-squares method for determining these parameters, unlike the use of graphical methods, allows a statistical assessment of the results of approximating empirical histograms by the Weibull formula. On the basis of a computer-aided analysis of the statistical data, it was shown that, at a fixed point where the wind speed changes with height, the range of variation of the Weibull distribution curve parameters is relatively small, the sensitivity of the function to parameter changes is quite low, and the influence of such changes on the shape of the speed distribution curves is negligible. Taking this into consideration, we propose and mathematically verify a methodology for determining the speed parameters of the Weibull function at other heights using the parameters computed for this function at a basic height, which is known or defined by the average wind speed or by the roughness coefficient of the underlying surface. We give examples of the practical application of the suggested methodology in the development of the Atlas of Renewable Energy Resources in Russia under conditions of a deficiency of source meteorological data. The proposed methodology may, to some extent, solve the problem of the lack of information on the vertical profile of wind speed repeatability, given the wide assortment of wind turbines on the global market with different ranges of wind-wheel axis heights and various performance characteristics; as a result, it can become a powerful tool for the effective selection of equipment when designing a power supply system at a certain location.
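
    A sketch of the least-squares step, using the standard linearization of the Weibull CDF, ln(-ln(1-F)) = k·ln(v) - k·ln(c); the gradated histogram values are invented for illustration.

```python
import numpy as np

# Hypothetical wind-speed gradations: bin upper edges (m/s) and relative frequencies,
# the kind of summary available when source meteorological data are limited.
v_edges = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 12.0])
freq    = np.array([0.08, 0.14, 0.17, 0.16, 0.13, 0.12, 0.12, 0.05, 0.03])

F = np.cumsum(freq) / freq.sum()          # empirical CDF at the bin edges
F = np.clip(F, None, 0.999)               # keep log(-log(1-F)) finite at the top bin

x, y = np.log(v_edges), np.log(-np.log(1.0 - F))
k, intercept = np.polyfit(x, y, 1)        # least-squares line through the points
c = np.exp(-intercept / k)                # scale parameter from the intercept
print(f"Weibull shape k = {k:.2f}, scale c = {c:.2f} m/s")
```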

  20. Alternative Approaches to Evaluation in Empirical Microeconomics

    ERIC Educational Resources Information Center

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  1. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², is explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
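
    The error functions named above are standard; a sketch with their usual definitions (n data points, p fitted isotherm parameters), evaluated on invented Langmuir-type data.

```python
import numpy as np

def error_functions(q_exp, q_calc, p):
    # q_exp, q_calc: experimental and model-predicted equilibrium uptakes.
    n, resid = len(q_exp), q_exp - q_calc
    return {
        "ERRSQ":  np.sum(resid ** 2),
        "EABS":   np.sum(np.abs(resid)),
        "ARE":    100.0 / n * np.sum(np.abs(resid / q_exp)),
        "HYBRID": 100.0 / (n - p) * np.sum(resid ** 2 / q_exp),
        "MPSD":   100.0 * np.sqrt(np.sum((resid / q_exp) ** 2) / (n - p)),
    }

Ce = np.linspace(5.0, 200.0, 10)                            # equilibrium concentrations
q_calc = 30.0 * Ce / (1.0 + 0.05 * Ce)                      # two-parameter Langmuir model
q_exp = q_calc + np.random.default_rng(3).normal(0, 5, 10)  # noisy "measurements"
print(error_functions(q_exp, q_calc, p=2))
```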

  2. Non-classicality criteria: Glauber-Sudarshan P function and Mandel Q parameter

    NASA Astrophysics Data System (ADS)

    Alexanian, Moorad

    2018-01-01

    We calculate exactly the quantum mechanical, temporal Wigner quasiprobability density for a single-mode, degenerate parametric amplifier for a system in a Gaussian state, viz., a displaced-squeezed thermal state. The Wigner function allows us to calculate the fluctuations in photon number and the quadrature variance. We contrast the difference between the non-classicality criterion based on the Glauber-Sudarshan quasiprobability distribution P, which is independent of the displacement parameter, and the classical/non-classical behaviour of the Mandel Q parameter, which depends strongly on the displacement parameter. We find a phase transition as a function of the squeezing parameter such that, at its critical value, the Mandel Q parameter, as a function of the displacement parameter, goes from strictly classical, below the critical value, to a mixed classical/non-classical behaviour, above it.

  3. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).

  4. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel II. Distribution functions and moments.

    PubMed

    Langenbucher, Frieder

    2003-01-01

    MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.

  5. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
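
    A toy version of the Monte Carlo loop, assuming a lognormal recurrence PDF and an invented five-interval record; wide-ranging parameter draws are ranked by the likelihood of the observed intervals.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(11)
intervals = np.array([310.0, 150.0, 210.0, 90.0, 480.0])  # short paleoseismic record (yr)

n_draws = 20000
log_mean = rng.uniform(np.log(50.0), np.log(1000.0), n_draws)  # log of mean recurrence
sigma = rng.uniform(0.2, 1.5, n_draws)                         # lognormal shape parameter

loglike = np.array([lognorm.logpdf(intervals, s, scale=np.exp(m)).sum()
                    for m, s in zip(log_mean, sigma)])
best = np.argsort(loglike)[::-1][:500]                         # ranked parameter sets
print(f"most likely mean recurrence ~ {np.exp(log_mean[best]).mean():.0f} yr, "
      f"shape ~ {sigma[best].mean():.2f}")
```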

  7. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series that simulate streamflow better than a traditional calibration approach are themselves a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
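
    A sketch of the data-reduction step using PyWavelets as a stand-in for the study's DWT implementation; a synthetic rainfall series is reduced to its low-order approximation coefficients, which are the quantities inferred alongside the model parameters.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(5)
rainfall = rng.gamma(0.3, 8.0, size=512)   # synthetic rainfall series (mm per step)

# Multilevel DWT: the level-4 approximation holds the bulk of the signal in
# far fewer coefficients than the original series.
coeffs = pywt.wavedec(rainfall, "db4", level=4)
approx, details = coeffs[0], coeffs[1:]
print(f"{rainfall.size} rainfall values -> {approx.size} approximation coefficients")

# Smoothed rainfall estimate reconstructed from the approximation alone.
reduced = [approx] + [np.zeros_like(d) for d in details]
rain_hat = pywt.waverec(reduced, "db4")
```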

  8. Characterizing the Spatial Density Functions of Neural Arbors

    NASA Astrophysics Data System (ADS)

    Teeter, Corinne Michelle

    Recently, it has been proposed that a universal function describes the way in which all arbors (axons and dendrites) spread their branches over space. Data from fish retinal ganglion cells as well as cortical and hippocampal arbors from mouse, rat, cat, monkey and human provide evidence that all arbor density functions (adf) can be described by a Gaussian function truncated at approximately two standard deviations. A Gaussian density function implies that there is a minimal set of parameters needed to describe an adf: two or three standard deviations (depending on the dimensionality of the arbor) and an amplitude. However, the parameters needed to completely describe an adf could be further constrained by a scaling law found between the product of the standard deviations and the amplitude of the function. In the following document, I examine the scaling law relationship in order to determine the minimal set of parameters needed to describe an adf. First, I find that the flat, two-dimensional arbors of fish retinal ganglion cells require only two out of the three fundamental parameters to completely describe their density functions. Second, the three-dimensional, volume-filling, cortical arbors require four fundamental parameters: three standard deviations and the total length of an arbor (which corresponds to the amplitude of the function). Next, I characterize the shape of arbors in the context of the fundamental parameters. I show that the parameter distributions of the fish retinal ganglion cells are largely homogeneous. In general, axons are bigger and less dense than dendrites; however, they are similarly shaped. The parameter distributions of these two arbor types overlap and, therefore, can only be differentiated from one another probabilistically based on their adfs. Despite artifacts in the cortical arbor data, different types of arbors (apical dendrites, non-apical dendrites, and axons) can generally be differentiated based on their adfs. In addition, within arbor type, there is evidence of different neuron classes (such as interneurons and pyramidal cells). How well different types and classes of arbors can be differentiated is quantified using the Random Forest™ supervised learning algorithm.

  9. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such smoothing should be valuable for future measurements of galaxy clustering.

  10. An empirical study on the utility of BRDF model parameters and topographic parameters for mapping vegetation in a semi-arid region with MISR imagery

    USDA-ARS?s Scientific Manuscript database

    Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...

  11. Ventilation-perfusion distribution in normal subjects.

    PubMed

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of exercise, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.

  12. Implications inferred from anisotropy parameter of proton distributions related to EMIC waves in the inner magnetosphere.

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Lee, D. Y.

    2017-12-01

    In the classic theory of wave-particle resonant interaction, the anisotropy parameter of the proton distribution is considered an important factor in determining instabilities such as the ion cyclotron instability. The particle distribution function is often assumed to be a bi-Maxwellian distribution, for which the anisotropy parameter simplifies to the temperature anisotropy (T⊥/T∥-1), independent of the specific energy of the particles. In this paper, we studied the proton anisotropy related to EMIC waves using Van Allen Probes observations in the inner magnetosphere. First, we found that the real velocity distribution of protons is usually not well expressed by a simple bi-Maxwellian distribution. We therefore calculated the anisotropy parameter using the exact formula defined by Kennel and Petschek [1966] and investigated the resulting linear instability criterion. We found that, for the majority of the EMIC wave events, the threshold anisotropy condition for the proton cyclotron instability is satisfied in the expected range of resonant energy. We further determined the parallel plasma beta and its inverse relationship with the anisotropy parameter. The inverse relationship exists both during EMIC wave times and non-EMIC wave times, but with different slopes. Based on this result, we demonstrate that the parallel plasma beta can be a critical factor that determines the occurrence of EMIC waves.

  13. Dynamic gadoxetate-enhanced MRI for the assessment of total and segmental liver function and volume in primary sclerosing cholangitis.

    PubMed

    Nilsson, Henrik; Blomqvist, Lennart; Douglas, Lena; Nordell, Anders; Jacobsson, Hans; Hagen, Karin; Bergquist, Annika; Jonas, Eduard

    2014-04-01

    To evaluate dynamic hepatocyte-specific contrast-enhanced MRI (DHCE-MRI) for the assessment of global and segmental liver volume and function in patients with primary sclerosing cholangitis (PSC), and to explore the heterogeneous distribution of liver function in this patient group. Twelve patients with primary sclerosing cholangitis (PSC) and 20 healthy volunteers were examined using DHCE-MRI with Gd-EOB-DTPA. Segmental and total liver volume were calculated, and functional parameters (hepatic extraction fraction [HEF], input relative blood-flow [irBF], and mean transit time [MTT]) were calculated in each liver voxel using deconvolutional analysis. In each study subject, an incongruence score (IS) was constructed to describe the mismatch between segmental function and volume. Among patients, the liver function parameters were correlated with bile duct obstruction and with established scoring models for liver disease. Liver function was significantly more heterogeneously distributed in the patient group (IS 1.0 versus 0.4). There were significant correlations between biliary obstruction and segmental functional parameters (HEF rho -0.24; irBF rho -0.45), and the Mayo risk score correlated significantly with the total liver extraction capacity of Gd-EOB-DTPA (rho -0.85). The study demonstrates a new method to quantify total and segmental liver function using DHCE-MRI in patients with PSC. Copyright © 2013 Wiley Periodicals, Inc.

  14. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice for specific parameter estimation methods is often more dependent on its availability than its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes number of well performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  16. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice for specific parameter estimation methods is often more dependent on its availability than its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes number of well performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  17. Bayesian inference of galaxy formation from the K-band luminosity function of galaxies: tensions between theory and observation

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Mo, H. J.; Katz, Neal; Weinberg, Martin D.

    2012-04-01

    We conduct Bayesian model inferences from the observed K-band luminosity function of galaxies in the local Universe, using the semi-analytic model (SAM) of galaxy formation introduced in Lu et al. The prior distributions for the 14 free parameters include a large range of possible models. We find that some of the free parameters, e.g. the characteristic scales for quenching star formation in both high-mass and low-mass haloes, are already tightly constrained by the single data set. The posterior distribution includes the model parameters adopted in other SAMs. By marginalizing over the posterior distribution, we make predictions that include the full inferential uncertainties for the colour-magnitude relation, the Tully-Fisher relation, the conditional stellar mass function of galaxies in haloes of different masses, the H I mass function, the redshift evolution of the stellar mass function of galaxies and the global star formation history. Using posterior predictive checking with the available observational results, we find that the model family (i) predicts a Tully-Fisher relation that is curved; (ii) significantly overpredicts the satellite fraction; (iii) vastly overpredicts the H I mass function; (iv) predicts high-z stellar mass functions that have too many low-mass galaxies and too few high-mass ones; and (v) predicts a redshift evolution of the stellar mass density and the star formation history that are in moderate disagreement with the data. These results suggest that some important processes are still missing in the current model family, and we discuss a number of possible solutions to solve the discrepancies, such as interactions between galaxies and dark matter haloes, tidal stripping, the bimodal accretion of gas, preheating and a redshift-dependent initial mass function.

  18. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.

  19. C-parameter distribution at N³LL′ including power corrections

    DOE PAGES

    Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; ...

    2015-05-15

    We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s^3), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.

  20. Constraining the double gluon distribution by the single gluon distribution

    DOE PAGES

    Golec-Biernat, Krzysztof; Lewandowska, Emilia; Serino, Mirko; ...

    2015-10-03

    We show how to consistently construct initial conditions for the QCD evolution equations for double parton distribution functions in the pure gluon case. We use the momentum sum rule for this purpose and a specific form of the known single gluon distribution function in the MSTW parameterization. The resulting double gluon distribution satisfies exactly the momentum sum rule and is parameter free. Furthermore, we study numerically its evolution with a hard scale and show the approximate factorization into a product of two single gluon distributions at small values of x, whereas at large values of x the factorization is always violated, in agreement with the sum rule.

  1. Investigation of pore size and energy distributions by statistical physics formalism applied to agriculture products

    NASA Astrophysics Data System (ADS)

    Aouaini, Fatma; Knani, Salah; Yahia, Manel Ben; Bahloul, Neila; Ben Lamine, Abdelmottaleb; Kechaou, Nabil

    2015-12-01

    In this paper, we present a new investigation that allows determining the pore size distribution (PSD) in a porous medium. The PSD is obtained from the desorption isotherms of four varieties of olive leaves, by means of a statistical physics formalism and Kelvin's law. The results are compared with those obtained with scanning electron microscopy. The effect of temperature on the distribution function of pores has been studied, and the influence of each parameter on the PSD is interpreted. A similar function, the adsorption energy distribution (AED), is deduced from the PSD.
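
    A sketch of the Kelvin's-law step, mapping a desorption isotherm onto pore radii; the physical constants are for water near room temperature, and the isotherm values are invented rather than taken from the olive-leaf data.

```python
import numpy as np

GAMMA = 72.0e-3   # surface tension of water, N/m
V_M = 1.8e-5      # molar volume of water, m^3/mol
R, T = 8.314, 298.15

def kelvin_radius(a_w):
    # Kelvin's law: pore radius (m) that empties at water activity a_w (0 < a_w < 1).
    return -2.0 * GAMMA * V_M / (R * T * np.log(a_w))

# Hypothetical desorption isotherm: water content X (dry basis) versus activity a_w.
a_w = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95])
X   = np.array([0.04, 0.06, 0.08, 0.11, 0.16, 0.24, 0.38])

r = kelvin_radius(a_w)          # pore radii probed at each activity
psd = np.gradient(X, r)         # dX/dr as a crude pore-size density
print(np.c_[r * 1e9, psd])      # radii in nm alongside the PSD estimate
```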

  2. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
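
    Comparison via characteristic functions needs no closed-form density; a sketch contrasting an empirical characteristic function with a Gaussian reference of matched scale, where synthetic heavy-tailed returns stand in for index data.

```python
import numpy as np

def ecf(x, t):
    # Empirical characteristic function of a univariate sample x on a grid t.
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(9)
returns = rng.standard_t(df=4, size=5000) * 0.01   # stand-in for daily index returns

t = np.linspace(-50.0, 50.0, 201)
phi_hat = ecf(returns, t)
phi_gauss = np.exp(-0.5 * (t * returns.std()) ** 2)  # CF of a zero-mean Gaussian

# Maximum characteristic-function distance as a simple goodness-of-fit statistic:
# heavy tails show up as a clear departure from the Gaussian reference.
print(np.max(np.abs(phi_hat - phi_gauss)))
```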

  3. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important direction in the study of pedestrian behavior, because visual perception of space is the most direct way to obtain environmental information and navigate our actions. Based on agent modeling and a top-down method, the paper develops a framework for analyzing pedestrian flow dependent on visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on agent simulation to direct agent motion in urban space. We analyze pedestrian behavior at the micro-scale and macro-scale of urban open space. An individual agent uses visual affordances to determine its direction of motion in a micro-scale urban street or district. We also compare the distribution of pedestrian flow with the configuration of the macro-scale urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban functions. The paper first computes the visibility situation at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. The multiple agents use these visibility parameters to decide their directions of motion, and pedestrian flow finally reaches a stable state in the urban environment through the simulation of the multi-agent system. We compare the morphology of the visibility parameters and the pedestrian distribution with the urban function and facilities layout to confirm the consistency between them, which can be used for decision support in urban design.

  4. An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles

    1999-01-01

    Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)

  5. Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com; Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr; Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma

    2016-07-15

    To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.

  6. Electronic structure, dielectric response, and surface charge distribution of RGD (1FUV) peptide.

    PubMed

    Adhikari, Puja; Wen, Amy M; French, Roger H; Parsegian, V Adrian; Steinmetz, Nicole F; Podgornik, Rudolf; Ching, Wai-Yim

    2014-07-08

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor.

  7. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plots is used. This presentation discusses the results of a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted on double-logarithmic scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood of representing parametrically the shape of the tails. Notably, forcing a model to reproduce the tails as PL functions results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.
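
    A hedged sketch of the kind of tail comparison described here: a power law and a lognormal are fitted by maximum likelihood to the late-time samples and compared by AIC. The synthetic data, the tail cutoff, and the untruncated lognormal fit are simplifying assumptions, not the study's actual tools (a rigorous treatment would fit a truncated lognormal):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = stats.lognorm.rvs(s=1.0, scale=50.0, size=2000, random_state=rng)
tmin = np.quantile(t, 0.7)          # treat the upper 30% as the "late-time tail"
tail = t[t >= tmin]

# Power-law MLE (continuous Pareto with lower bound tmin): pdf ~ t^(-alpha)
alpha = 1.0 + len(tail) / np.sum(np.log(tail / tmin))
ll_pl = np.sum(stats.pareto.logpdf(tail, b=alpha - 1.0, scale=tmin))

# Lognormal MLE on the same tail samples (truncation ignored for brevity)
shape, loc, scale = stats.lognorm.fit(tail, floc=0.0)
ll_ln = np.sum(stats.lognorm.logpdf(tail, shape, loc, scale))

# AIC comparison: power law has 1 fitted parameter, lognormal has 2
aic_pl = 2 * 1 - 2 * ll_pl
aic_ln = 2 * 2 - 2 * ll_ln
print(f"alpha = {alpha:.2f}, AIC(PL) = {aic_pl:.1f}, AIC(LN) = {aic_ln:.1f}")
```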

  8. Observations of the directional distribution of the wind energy input function over swell waves

    NASA Astrophysics Data System (ADS)

    Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.

    2016-02-01

    Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β ∝ cos^{3.6}θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and in predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with the ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.
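
    For illustration, the exponent of a cos^n(θ) directional distribution of this kind can be recovered from measured growth-rate fractions by nonlinear least squares; the synthetic "measurements" and noise level below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic growth-rate fractions at wind-wave angles theta (hypothetical data)
theta = np.deg2rad(np.arange(0, 46, 5))
true_n = 3.6
rng = np.random.default_rng(0)
beta = np.cos(theta) ** true_n * (1 + 0.05 * rng.standard_normal(theta.size))

# Fit the directional exponent n in beta ~ cos^n(theta)
model = lambda th, n: np.cos(th) ** n
n_hat, cov = curve_fit(model, theta, beta, p0=[1.0])
print(f"fitted exponent n = {n_hat[0]:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```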

  9. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, currently, actual knowledge of the transfer functions is often implicitly assumed. As a matter of fact, in most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the concept of context free grammars. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. The contribution therefore gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for equations. The parametrization of the transfer functions is then achieved through a second optimization routine. The contribution explores different aspects of the described procedure through a set of experiments, which can be divided into three categories: (1) the inference of transfer functions from directly measurable parameters; (2) the estimation of global parameters for given transfer functions from runoff data; and (3) the estimation of sets of completely unknown transfer functions from runoff data. The conducted tests reveal different potentials and limits of the procedure. Concretely, it is shown that cases (1) and (2) work remarkably well, whereas case (3) is much more dependent on the setup; in that case considerably more data are needed to derive transfer function estimates, even for simple models and setups. References: - Chomsky, N. (1956): Three Models for the Description of Language. IRE Transactions on Information Theory, 2(3), pp. 113-124. - O'Neill, M. (2001): Grammatical Evolution. IEEE Transactions on Evolutionary Computation, Vol. 5, No. 4. - Samaniego, L.; Kumar, R.; Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resources Research, Vol. 46, W05523, doi:10.1029/2008WR007327.
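
    A minimal sketch of the genotype-to-phenotype mapping that grammatical evolution uses, with a toy grammar for transfer functions; the grammar rules, depth limit, and spatial predictors (slope, elev) are illustrative assumptions, not the authors' grammar:

```python
import random

# A tiny context-free grammar mapping spatial predictors to a model
# parameter. Production rules are hypothetical.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<coef>"], ["slope"], ["elev"]],
    "<op>":   [["+"], ["*"]],
    "<coef>": [["0.1"], ["1.0"], ["2.0"]],
}

def decode(genome, symbol="<expr>", depth=0):
    """Grammatical-evolution style mapping: each codon of the integer
    genome selects a production rule for the leftmost nonterminal."""
    if symbol not in GRAMMAR:              # terminal: emit as-is
        return symbol
    rules = GRAMMAR[symbol]
    if symbol == "<expr>" and depth > 3:   # force termination near depth limit
        rules = rules[1:]                  # drop the recursive rule
    rule = rules[genome.pop(0) % len(rules)]
    return " ".join(decode(genome, s, depth + 1) for s in rule)

random.seed(42)
genome = [random.randint(0, 255) for _ in range(200)]
expr = decode(genome)
print("candidate transfer function:", expr)
print("value at slope=0.3, elev=1.2:", eval(expr, {"slope": 0.3, "elev": 1.2}))
```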

  10. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
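
    The numerical idea can be sketched in Python (the program itself is in C): summing the Poisson terms in log space prevents overflow and underflow, and the result can be checked against the chi-square identity the record alludes to. Names below are hypothetical:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp, gammaln

def cumpois(k, lam):
    """Cumulative Poisson P(X <= k), summed in log space so that large
    lambda neither overflows individual terms nor underflows partial sums."""
    n = np.arange(k + 1)
    log_terms = n * np.log(lam) - lam - gammaln(n + 1)
    return np.exp(logsumexp(log_terms))

lam, k = 150.0, 140
p = cumpois(k, lam)
# Identity linking the Poisson cdf to the chi-square survival function:
# P(X <= k; lam) = P(chi2 with 2(k+1) dof > 2*lam)
print(p, stats.chi2.sf(2 * lam, 2 * (k + 1)))
```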

  11. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

    We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e. the median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach to zero of the distance decay function, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
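
    A sketch of a variable logistic distance decay shaped by the median and SD of each location's provider distances. The paper's exact functional form is not reproduced here, so the formula below is an illustrative assumption:

```python
import numpy as np

def logistic_decay(d, med, sd):
    """Logistic-type distance decay weight, shaped per population location
    by the median and SD of its population-to-provider distances.
    The exact functional form is an illustrative assumption."""
    # midpoint at the median distance; steepness set by the SD
    return 1.0 / (1.0 + np.exp((d - med) / (sd + 1e-12)))

# distances (km) from one population location to all providers within a
# global catchment; that location's own median and SD shape its curve
d = np.array([1.5, 3.0, 4.2, 7.5, 9.0, 12.0, 20.0])
med, sd = np.median(d), d.std()
weights = logistic_decay(d, med, sd)
print(np.round(weights, 3))   # near-zero weights mark the effective catchment edge
```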

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marekova, Elisaveta

    Series of relatively large earthquakes in different regions of the Earth are studied. The regions chosen have high seismic activity and good contemporary networks for recording the seismic events along them. The main purpose of this investigation is to attempt to describe analytically the seismic process in space and time. We consider the statistical distributions of the distances and the times between consecutive earthquakes (so-called pair analysis). Studies conducted on approximating the statistical distributions of the parameters of consecutive seismic events indicate the existence of characteristic functions that describe them best. Such a mathematical description allows the distributions of the examined parameters to be compared to other model distributions.

  13. The Effect of Roughness Model on Scattering Properties of Ice Crystals.

    NASA Technical Reports Server (NTRS)

    Geogdzhayev, Igor V.; Van Diedenhoven, Bastiaan

    2016-01-01

    We compare stochastic models of microscale surface roughness assuming uniform and Weibull distributions of crystal facet tilt angles to calculate scattering by roughened hexagonal ice crystals using the geometric optics (GO) approximation. Both distributions are determined by similar roughness parameters, while the Weibull model depends on the additional shape parameter. Calculations were performed for two visible wavelengths (864 nm and 410 nm) for roughness values between 0.2 and 0.7 and Weibull shape parameters between 0 and 1.0 for crystals with aspect ratios of 0.21, 1 and 4.8. For this range of parameters we find that, for a given roughness level, varying the Weibull shape parameter can change the asymmetry parameter by up to about 0.05. The largest effect of the shape parameter variation on the phase function is found in the backscattering region, while the degree of linear polarization is most affected at the side-scattering angles. For high roughness, scattering properties calculated using the uniform and Weibull models are in relatively close agreement for a given roughness parameter, especially when a Weibull shape parameter of 0.75 is used. For smaller roughness values, a shape parameter close to unity provides a better agreement. Notable differences are observed in the phase function over the scattering angle range from 5deg to 20deg, where the uniform roughness model produces a plateau while the Weibull model does not.
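
    An illustrative comparison of the two tilt-angle models under assumed parameterizations (the exact geometric-optics roughness parameterization is not reproduced here); it shows how the Weibull shape parameter changes the tail of the tilt distribution relative to the uniform model:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.4          # roughness parameter (same for both models)
eta = 0.75           # Weibull shape parameter
n = 100_000

# Uniform model: tilt parameter drawn uniformly, scaled by the roughness level
t_uniform = sigma * rng.uniform(0.0, 1.0, n)

# Weibull model with the same scale; smaller eta gives a heavier tail
t_weibull = sigma * rng.weibull(eta, n)

for name, t in [("uniform", t_uniform), ("Weibull", t_weibull)]:
    print(f"{name:8s} mean tilt parameter = {t.mean():.3f},  "
          f"99th percentile = {np.percentile(t, 99):.3f}")
```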

  14. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
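
    A toy Monte Carlo illustrating the point made here, that D(mean) and the variance stabilize with the number of uniform sampling points while D(min) and D(max), being extremes, keep drifting; the dose model and source positions are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(7)

def dose(points):
    """Stand-in dose model: inverse-square-like falloff from two hypothetical
    point sources inside a unit-cube PTV (purely illustrative)."""
    src = np.array([[0.3, 0.5, 0.5], [0.7, 0.5, 0.4]])
    r2 = ((points[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    return (1.0 / np.maximum(r2, 1e-4)).sum(1)

for n in [100, 1_000, 10_000, 100_000]:
    pts = rng.uniform(0.0, 1.0, (n, 3))      # uniform sampling points in the PTV
    d = dose(pts)
    # D_mean and the variance converge quickly; D_min/D_max do not
    print(f"n={n:6d}  D_mean={d.mean():8.2f}  var={d.var():10.1f}  "
          f"D_min={d.min():6.2f}  D_max={d.max():10.1f}")
```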

  15. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
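
    A sketch of the conventional fraction-of-range approach the abstract describes (a PERT-style mean and a standard deviation of one sixth of the range); NASA's newer fixed-shape-parameter method is not reproduced here, and the example values are hypothetical:

```python
def beta_from_three_points(lo, mode, hi):
    """Fit a four-parameter beta distribution to (min, most likely, max)
    using common PERT-style assumptions: mean = (lo + 4*mode + hi)/6 and
    standard deviation = (hi - lo)/6. This is the 'conventional' approach
    described in the text, not the newer in-house method."""
    mean = (lo + 4.0 * mode + hi) / 6.0
    sd = (hi - lo) / 6.0
    m = (mean - lo) / (hi - lo)          # mean mapped to the unit interval
    v = (sd / (hi - lo)) ** 2            # variance on the unit interval
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Turbine-inlet-temperature-style example (values hypothetical, in K)
a, b = beta_from_three_points(1400.0, 1480.0, 1520.0)
print(f"beta shape parameters: alpha = {a:.2f}, beta = {b:.2f}")
```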

  16. Optimization of design and operating parameters of a space-based optical-electronic system with a distributed aperture.

    PubMed

    Tcherniavski, Iouri; Kahrizi, Mojtaba

    2008-11-20

    Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems for the geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for sample optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF taking residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems depending on diverse variable parameters. The information on the magnitudes of the SNR can be used to determine the number of subapertures and their sizes, while the information on the SNR decrease due to the IA and OPD errors can be useful in the design of a beam combination control system, establishing the necessary requirements on its accuracy on the basis of the permissible deterioration in image quality.

  17. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages for such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent such more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
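
    A minimal sketch of a binary mixing model for a decaying tracer: the output concentration is the input history convolved with a two-component age distribution and radioactive decay. Exponential mixing models are assumed for both components here for simplicity, and the input history and parameter values are hypothetical:

```python
import numpy as np

LAMBDA = np.log(2.0) / 12.32          # tritium decay constant (1/yr)

def exp_age_pdf(tau, mrt):
    """Exponential-mixing-model age distribution with mean residence time mrt."""
    return np.exp(-tau / mrt) / mrt

def binary_output(c_in, mrt1, mrt2, frac, dt=1.0):
    """Tracer output of a binary mixing model: a fraction `frac` of the water
    follows age distribution 1, the rest distribution 2 (a simplified sketch
    of the 5-parameter compound models discussed here)."""
    tau = np.arange(len(c_in)) * dt
    g = frac * exp_age_pdf(tau, mrt1) + (1.0 - frac) * exp_age_pdf(tau, mrt2)
    decay = np.exp(-LAMBDA * tau)
    # convolve the (reversed) input history with the age distribution + decay
    return dt * np.sum(c_in[::-1] * g * decay)

# hypothetical annual tritium input (TU), oldest year first, with a bomb peak
years = np.arange(1960, 2017)
c_in = 2.0 + 60.0 * np.exp(-0.5 * ((years - 1965) / 4.0) ** 2)
print(f"modelled 2016 output: {binary_output(c_in, 5.0, 80.0, 0.6):.2f} TU")
```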

  18. Robust automatic control system of vessel descent-rise device for plant with distributed parameters “cable – towed underwater vehicle”

    NASA Astrophysics Data System (ADS)

    Chupina, K. V.; Kataev, E. V.; Khannanov, A. M.; Korshunov, V. N.; Sennikov, I. A.

    2018-05-01

    The paper is devoted to the problem of synthesizing a robust control system for a plant with distributed parameters. The vessel descent-rise device has a heave compensation function for stabilization of the towed underwater vehicle at a set depth. The sea state code, the parameters of the underwater vehicle, and the cable vary during underwater operations, and the vessel heave is a stochastic process; hence both the plant and the external disturbances are uncertain. That is why it is necessary to use robust control theory for the synthesis of the automatic control system, but without the use of traditional optimization methods, because the cable has distributed parameters. The offered technique has allowed us to design an effective control system for stabilizing the immersion depth of the towed underwater vehicle for various degrees of sea roughness and to ensure its robustness to deviations of the parameters of the vehicle and the cable's length.

  19. On the latitudinal distribution of Titan's haze at the Voyager epoch

    NASA Astrophysics Data System (ADS)

    Negrao, A.; Roos-Serote, M.; Rannou, P.; Rages, K.; McKay, C.

    2002-09-01

    In this work, we re-analyse a total of 10 high-phase-angle images of Titan (2 from Voyager 1 and 8 from Voyager 2). The images were acquired in different filters of the Voyager Imaging Sub System in 1980-1981. We apply a model, developed and used by Rannou et al. (1997) and Cabane et al. (1992), that calculates the vertical (1-D) distribution of haze particles and the I/F radial profiles as a function of a series of parameters. Two of these parameters, the haze particle production rate (P) and the imaginary refractive index (xk), are used to obtain fits to the observed I/F profiles at different latitudes. Different from previous studies, we consider all filters simultaneously, in an attempt to better fix the parameter values. We also include the filter response functions, not considered previously. The results show that P does not change significantly as a function of latitude, even though somewhat lower values are found at high northern latitudes; xk seems to increase towards southern latitudes. We will compare our results with GCM runs, which can give the haze distribution at the epoch of the observations. Work financed by the Portuguese Foundation for Science and Technology (FCT), contract ESO/PRO/40157/2000

  20. Beam energy dependence of pseudorapidity distributions of charged particles produced in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Basu, Sumit; Nayak, Tapan K.; Datta, Kaustuv

    2016-06-01

    Heavy-ion collisions at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN probe matter at extreme conditions of temperature and energy density. Most of the global properties of the collisions can be extracted from the measurements of charged-particle multiplicity and pseudorapidity (η) distributions. We have shown that the available experimental data on beam energy and centrality dependence of η distributions in heavy-ion (Au+Au or Pb+Pb) collisions from √{sNN} = 7.7 GeV to 2.76 TeV are reasonably well described by the AMPT model, which is used for further exploration. The nature of the η distributions has been described by a double Gaussian function using a set of fit parameters, which exhibit a regular pattern as a function of beam energy. By extrapolating the parameters to a higher energy of √{sNN} = 5.02 TeV, we have obtained the charged-particle multiplicity densities, η distributions, and energy densities for various centralities. Incidentally, these results match well with some of the recently published data by the ALICE Collaboration.
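
    The double Gaussian form used for dN/dη can be sketched as a difference of two centred Gaussians fitted by nonlinear least squares; the synthetic data, amplitudes, and widths below are hypothetical, not the AMPT fit values:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(eta, a1, s1, a2, s2):
    """Difference of two centred Gaussians, a functional form commonly
    used to describe charged-particle dN/deta distributions."""
    return a1 * np.exp(-eta**2 / (2 * s1**2)) - a2 * np.exp(-eta**2 / (2 * s2**2))

# synthetic dN/deta "data" with a mid-rapidity dip (values hypothetical)
eta = np.linspace(-5, 5, 61)
rng = np.random.default_rng(5)
y = double_gauss(eta, 900.0, 2.8, 250.0, 1.1) * (1 + 0.02 * rng.standard_normal(eta.size))

p, _ = curve_fit(double_gauss, eta, y, p0=[800.0, 3.0, 200.0, 1.0])
print("fitted (A1, sigma1, A2, sigma2):", np.round(p, 2))
```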

  1. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    PubMed

    Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji

    2010-02-01

    Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate gamma for the value functions close to 1, these algorithms do not permit gamma to be set exactly at gamma = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution through backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting gamma = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that these can improve the performances of existing PG methods.

  2. Decoupled ARX and RBF Neural Network Modeling Using PCA and GA Optimization for Nonlinear Distributed Parameter Systems.

    PubMed

    Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing

    2018-02-01

    Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy consisting of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first decomposed into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
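
    A compact sketch of the first two stages only (PCA time/space separation, then a linear dynamic fit per dominant mode); the RBF residual model and GA optimization are omitted, and the synthetic field is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic spatio-temporal field (space x time), e.g. rod temperatures
x = np.linspace(0, 1, 50)
t = np.arange(400)
U_field = (np.sin(np.pi * x)[:, None] * np.sin(0.05 * t)[None, :]
           + 0.3 * np.sin(2 * np.pi * x)[:, None] * np.cos(0.11 * t)[None, :]
           + 0.01 * rng.standard_normal((50, 400)))

# PCA / time-space separation: dominant spatial modes + temporal coefficients
Us, s, Vt = np.linalg.svd(U_field - U_field.mean(1, keepdims=True),
                          full_matrices=False)
k = 2
phi, a = Us[:, :k], np.diag(s[:k]) @ Vt[:k]    # spatial modes, temporal series

# one-step ARX-style fit per mode: a[t] ~ a[t-1] (no exogenous input here)
for i in range(k):
    y, ylag = a[i, 1:], a[i, :-1]
    coef = (ylag @ y) / (ylag @ ylag)          # least-squares AR(1) coefficient
    print(f"mode {i}: AR(1) coefficient = {coef:.3f}")
```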

  3. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.

  4. Constraining the noise-free distribution of halo spin parameters

    NASA Astrophysics Data System (ADS)

    Benson, Andrew J.

    2017-11-01

    Any measurement made using an N-body simulation is subject to noise due to the finite number of particles used to sample the dark matter distribution function, and the lack of structure below the simulation resolution. This noise can be particularly significant when attempting to measure intrinsically small quantities, such as halo spin. In this work, we develop a model to describe the effects of particle noise on halo spin parameters. This model is calibrated using N-body simulations in which the particle noise can be treated as a Poisson process on the underlying dark matter distribution function, and we demonstrate that this calibrated model reproduces measurements of halo spin parameter error distributions previously measured in N-body convergence studies. Utilizing this model, along with previous measurements of the distribution of halo spin parameters in N-body simulations, we place constraints on the noise-free distribution of halo spins. We find that the noise-free median spin is 3 per cent lower than that measured directly from the N-body simulation, corresponding to a shift of approximately 40 times the statistical uncertainty in this measurement arising purely from halo counting statistics. We also show that measurement of the spin of an individual halo to 10 per cent precision requires at least 4 × 10⁴ particles in the halo - for haloes containing 200 particles, the fractional error on spins measured for individual haloes is of order unity. N-body simulations should be viewed as the results of a statistical experiment applied to a model of dark matter structure formation. When viewed in this way, it is clear that determination of any quantity from such a simulation should be made through forward modelling of the effects of particle noise.

  5. Integrated hydraulic and organophosphate pesticide injection simulations for enhancing event detection in water distribution systems.

    PubMed

    Schwartz, Rafi; Lahav, Ori; Ostfeld, Avi

    2014-10-15

    As a complementary step towards solving the general event detection problem of water distribution systems, injections of the organophosphate pesticides chlorpyrifos (CP) and parathion (PA) were simulated at various locations within example networks, and hydraulic parameters were calculated over a 24-h duration. The uniqueness of this study is that the chemical reactions and byproducts of the contaminants' oxidation were also simulated, as well as other indicative water quality parameters such as alkalinity, acidity, pH, and the total concentration of free chlorine species. The information on the change in water quality parameters induced by the contaminant injection may facilitate on-line detection of an actual event involving these specific substances and pave the way to the development of a generic methodology for detecting events involving the introduction of pesticides into water distribution systems. Simulation of the contaminant injection was performed at several nodes within two different networks. For each injection, the concentrations of the relevant contaminants' mother and daughter species, free chlorine species, and water quality parameters were simulated at nodes downstream of the injection location. The results indicate that injection of these substances can be detected under certain conditions by a very rapid drop in Cl2, functioning as the indicative parameter, as well as a drop in alkalinity concentration and a small decrease in pH, both functioning as supporting parameters, whose usage may reduce false positive alarms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  7. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
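
    A minimal Monte Carlo sketch contrasting two pairing scenarios and the mass-ratio distributions they produce; the IMF slope, mass limits, and the simplified "split" scenario are assumptions for illustration, not the paper's full set of scenarios:

```python
import numpy as np

rng = np.random.default_rng(11)

def imf_sample(n, alpha=2.35, m_lo=0.1, m_hi=10.0):
    """Draw masses from a Salpeter-like power-law IMF by inverse transform."""
    u = rng.uniform(size=n)
    a1 = 1.0 - alpha
    return (m_lo**a1 + u * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)

n = 200_000
# random pairing: both components drawn independently from the IMF
m_a, m_b = imf_sample(n), imf_sample(n)
q_random = np.minimum(m_a, m_b) / np.maximum(m_a, m_b)

# simplified split-style pairing: draw a system mass, then split it by a
# uniform mass ratio, so q is flat by construction
m_tot = imf_sample(n)
q_split = rng.uniform(0.0, 1.0, n)

print("random pairing: median q =", np.round(np.median(q_random), 3))
print("uniform split:  median q =", np.round(np.median(q_split), 3))
```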

  8. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Different from the Dorfman model, it uses a probit link function with a covariate variable taking the two values zero and one to express binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out with Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.

  9. The Cluster Variation Method: A Primer for Neuroscientists.

    PubMed

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  10. Derivation of low flow frequency distributions under human activities and its implications

    NASA Astrophysics Data System (ADS)

    Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua

    2017-06-01

    Low flow, the minimum streamflow in dry seasons, is crucial to water supply, agricultural irrigation, and navigation. Human activities, such as groundwater pumping, influence low flow severely. In order to derive the low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type three distribution; (2) the probability distribution of climatic dry spell periods is derived based on a base flow recession model; (3) the base flow recession model is updated under human activities; and (4) the low flow distribution under human activities is obtained based on the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used to describe the base flow recession, respectively. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve for the linear reservoir model, while altering the frequency distribution function for the nonlinear one. This indicates that simply altering the parameters of the low flow frequency distribution is not always feasible for tackling the changing environment.

  12. Bidirectional reflectance distribution function based surface modeling of non-Lambertian using intensity data of light detection and ranging.

    PubMed

    Li, Xiaolu; Liang, Yu; Xu, Lijun

    2014-09-01

    To provide a credible model for light detection and ranging (LiDAR) target classification, the focus of this study is the relationship between LiDAR intensity data and the bidirectional reflectance distribution function (BRDF). An integration method based on a coaxial laser detection system built in the lab was advanced. An intermediary BRDF model proposed by Schlick was introduced into the integration method, considering the diffuse and specular backscattering characteristics of the surface. A group of measurement campaigns was carried out to investigate the influence of the incident angle and detection range on the measured intensity data. Two extracted parameters, r and S(λ), are influenced by different surface features and describe the distribution and the magnitude of the reflected energy, respectively. The combination of the two parameters can be used to describe the surface characteristics for target classification in a more plausible way.

  13. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.

  14. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, in which misspecified or omitted covariates are described by an unobservable random variable. Although the distribution of the frailty is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to the earthquake data set of Turkey is examined.
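
    For reference, the Lindley distribution can be sampled through its exact mixture representation (an Exp(θ) draw with probability θ/(θ+1), otherwise a Gamma(2, θ) draw), which also gives a quick check of its mean; this is a generic sketch, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(13)

def lindley_sample(theta, n):
    """Sample the Lindley distribution via its mixture representation:
    f(x) = theta/(theta+1) * Exp(theta) + 1/(theta+1) * Gamma(2, theta)."""
    expo = rng.exponential(1.0 / theta, n)
    gam = rng.gamma(2.0, 1.0 / theta, n)
    pick_exp = rng.uniform(size=n) < theta / (theta + 1.0)
    return np.where(pick_exp, expo, gam)

theta = 1.5
x = lindley_sample(theta, 200_000)
mean_theory = (theta + 2.0) / (theta * (theta + 1.0))   # Lindley mean
print(f"sample mean = {x.mean():.4f}, theoretical mean = {mean_theory:.4f}")
```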

  15. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, whose use is hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.

  16. Continued development of a detailed model of arc discharge dynamics

    NASA Technical Reports Server (NTRS)

    Beers, B. L.; Pine, V. W.; Ives, S. T.

    1982-01-01

    Using a previously developed set of codes (SEMC, CASCAD, ACORN), a parametric study was performed to quantify the parameters which describe the development of a single-electron-initiated avalanche into a negative tip streamer. The electron distribution function in Teflon is presented for values of the electric field in the range of four hundred million volts/meter to four billion volts/meter. A formulation of the scattering parameters is developed which shows that the transport can be represented by three independent variables. The distribution of ionization sites is used to initiate an avalanche. The self-consistent evolution of the avalanche is computed over the parameter range of the scattering set.

  17. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. A total of 30 years of data on wave height, wind speed, and current velocity in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
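
    A sketch of the copula construction for the bivariate case: sample a Clayton copula by the standard conditional-inversion algorithm, map the uniforms through Pearson Type III marginals, and estimate a joint exceedance probability. All parameter values are hypothetical, and the return-period conversion is in units of the sampled events:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)

def clayton_pair(n, theta):
    """Sample (u1, u2) from a bivariate Clayton copula by conditional inversion."""
    v1, v2 = rng.uniform(size=(2, n))
    u2 = (v1 ** (-theta) * (v2 ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return v1, u2

# Clayton copula linking wave height and wind speed (parameters hypothetical)
u_h, u_w = clayton_pair(50_000, theta=2.0)
H = stats.pearson3.ppf(u_h, skew=1.0, loc=2.0, scale=0.8)    # wave height (m)
W = stats.pearson3.ppf(u_w, skew=0.5, loc=10.0, scale=3.0)   # wind speed (m/s)

# joint exceedance probability of a design combination -> joint return period
p_joint = np.mean((H > 4.0) & (W > 16.0))
print(f"P(H>4, W>16) = {p_joint:.4f}, joint return period = {1/p_joint:.0f} events"
      if p_joint > 0 else "no joint exceedances in sample")
```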

  18. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data provides a robust confirmation of our simple statistical model.

  19. Inferring the distribution of mutational effects on fitness in Drosophila.

    PubMed

    Loewe, Laurence; Charlesworth, Brian

    2006-09-22

    The properties of the distribution of deleterious mutational effects on fitness (DDME) are of fundamental importance for evolutionary genetics. Since it is extremely difficult to determine the nature of this distribution, several methods using various assumptions about the DDME have been developed for the purpose of parameter estimation. We apply a newly developed method to DNA sequence polymorphism data from two Drosophila species and compare estimates of the parameters of the distribution of the heterozygous fitness effects of amino acid mutations for several different distribution functions. The results exclude normal and gamma distributions, since these predict too few effectively lethal mutations, and power-law distributions, as a result of predicting too many lethals. Only the lognormal distribution appears to fit both the diversity data and the frequency of lethals. This DDME arises naturally in complex systems when independent factors contribute multiplicatively to an increase in fitness-reducing damage. Several important parameters, such as the fraction of effectively neutral non-synonymous mutations and the harmonic mean of non-neutral selection coefficients, are robust to the form of the DDME. Our results suggest that the majority of non-synonymous mutations in Drosophila are under effective purifying selection.

  20. Nucleon form factors in generalized parton distributions at high momentum transfers

    NASA Astrophysics Data System (ADS)

    Sattary Nikkhoo, Negin; Shojaei, Mohammad Reza

    2018-05-01

    This paper aims at calculating the elastic form factors of the nucleon by considering the extended Regge and modified Gaussian ansatzes based on generalized parton distributions. To reach this goal, we have considered three different parton distribution functions (PDFs) and compared the results obtained among them for high momentum transfer ranges. A minimal number of free parameters has been used in our parametrization. After obtaining the form factors, we calculate the electric radius and the transversely unpolarized and polarized densities of the nucleon. Furthermore, we obtain the impact-parameter-dependent PDFs. Finally, we compare our obtained data with the results of previous studies.

  1. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In the conventional inversion method (such as bounded-variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here, a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.

  2. Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM

    NASA Technical Reports Server (NTRS)

    Crane, Robert G.; Hewitson, B. C.

    1991-01-01

    A new diagnostic tool is developed for examining relationships between the synoptic scale circulation and regional temperature distributions in GCMs. The 4 x 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time-series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic scale circulation between a 1xCO2 and a 2xCO2 climate, using a doubled-CO2 version of the same GISS GCM.

  3. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for solving meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used the AVHRR/NOAA video data that regularly image the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, a cloudiness type was identified using the Bayes decision rule.
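
    To make the last two steps concrete, the sketch below classifies scalar measurements by combining per-class kernel density estimates with the Bayes decision rule. It is a minimal sketch assuming the plain Epanechnikov kernel and a fixed bandwidth; the paper's modified kernel and its risk-minimizing parameter selection are not reproduced, and the sample data are hypothetical.

        import numpy as np

        def epanechnikov_kde(train, h):
            # Nonparametric density estimate from a learning sample using the
            # (unmodified) Epanechnikov kernel K(u) = 0.75*(1 - u^2) on |u| <= 1.
            train = np.asarray(train, dtype=float)
            def pdf(x):
                u = (np.asarray(x, dtype=float)[..., None] - train) / h
                k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
                return k.mean(axis=-1) / h
            return pdf

        def bayes_classify(x, class_pdfs, priors):
            # Bayes decision rule: choose the class maximizing prior * likelihood.
            scores = np.stack([p * pdf(x) for pdf, p in zip(class_pdfs, priors)])
            return np.argmax(scores, axis=0)

        # Hypothetical normalized radiances for two cloudiness types.
        rng = np.random.default_rng(0)
        cumulus, cirrus = rng.normal(0.3, 0.05, 200), rng.normal(0.6, 0.08, 200)
        pdfs = [epanechnikov_kde(cumulus, 0.05), epanechnikov_kde(cirrus, 0.05)]
        print(bayes_classify(np.array([0.25, 0.65]), pdfs, [0.5, 0.5]))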

  4. Description of waves in inhomogeneous domains using Heun's equation

    NASA Astrophysics Data System (ADS)

    Bednarik, M.; Cervenka, M.

    2018-04-01

    There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. The interval for which the solution is sought includes two regular singular points; for this reason, a method is proposed which resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.

  5. Structure factor and radial distribution function of some liquid lanthanides using charged hard sphere

    NASA Astrophysics Data System (ADS)

    Patel, H. P.; Sonvane, Y. A.; Thakor, P. B.

    2017-05-01

    The structure factor S(q) and radial distribution function g(r) play a vital role in studying various structural properties, such as electronic, dynamic, and magnetic properties. The present paper deals with structural studies of the aforesaid properties using our newly constructed parameter-free model potential with the Charged Hard Sphere (CHS) approximation. The local field correction due to Sarkar et al. is used to incorporate exchange and correlation among the conduction electrons in dielectric screening. Here we report the S(q) and g(r) for some liquid lanthanides, viz. La, Ce, Pr, Nd and Eu. The present computed results are compared with the available experimental data. Lastly, we find that our parameter-free model potential successfully explains the structural properties of 4f liquid lanthanides.

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, convergence to a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
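
    A standard modern realization of such an iteration is the EM algorithm. The sketch below is a minimal univariate version on synthetic data, illustrating the alternating estimation structure rather than the authors' exact procedure or its convergence safeguards.

        import numpy as np

        def em_normal_mixture(x, k, n_iter=200, seed=0):
            # EM iteration for maximum-likelihood estimates of a univariate
            # k-component normal mixture (weights w, means mu, variances var).
            rng = np.random.default_rng(seed)
            x = np.asarray(x, dtype=float)
            w = np.full(k, 1.0 / k)
            mu = rng.choice(x, k, replace=False)
            var = np.full(k, x.var())
            for _ in range(n_iter):
                # E-step: responsibility of component j for each observation.
                p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                    / np.sqrt(2.0 * np.pi * var)
                r = p / p.sum(axis=1, keepdims=True)
                # M-step: reweighted sample moments.
                n_j = r.sum(axis=0)
                w = n_j / len(x)
                mu = (r * x[:, None]).sum(axis=0) / n_j
                var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_j
            return w, mu, var

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 0.5, 300)])
        print(em_normal_mixture(x, k=2))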

  7. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  8. Procedures for the design of low-pitching-moment airfoils

    NASA Technical Reports Server (NTRS)

    Barger, R. L.

    1975-01-01

    The pitching moment of a given airfoil is decreased by specifying appropriate modifications to its pressure distribution; by prescribing parameters in a special formula for the Theodorsen epsilon-function; and by superposition of a thickness distribution with subsequent tailoring. Advantages and disadvantages of the three methods are discussed.

  9. A double expansion method for the frequency response of finite-length beams with periodic parameters

    NASA Astrophysics Data System (ADS)

    Ying, Z. G.; Ni, Y. Q.

    2017-03-01

    A double expansion method for the frequency response of finite-length beams with periodically distributed parameters is proposed. The vibration response of the beam with spatially periodic parameters under harmonic excitations is studied. The frequency response of the periodic beam is a function of the parametric period and can therefore be expressed as a series with the product of periodic and non-periodic functions. The procedure of the double expansion method includes the following two main steps: first, the frequency response function and periodic parameters are expanded by using identical periodic functions based on the extension of the Floquet-Bloch theorem, and the period-parametric differential equation for the frequency response is converted into a series of linear differential equations with constant coefficients; second, the solutions to the linear differential equations are expanded by using modal functions which satisfy the boundary conditions, and the linear differential equations are converted into algebraic equations according to the Galerkin method. The expansion coefficients are obtained by solving the algebraic equations, and the frequency response function is then finally determined. The proposed double expansion method can uncouple the effects of the periodic expansion and modal expansion so that the expansion terms are determined respectively. The modal number considered in the second expansion can be reduced remarkably in comparison with the direct expansion method. The proposed double expansion method can be extended and applied to other structures with periodically distributed parameters for dynamics analysis. Numerical results on the frequency response of the finite-length periodic beam with various parametric wave numbers and wave amplitude ratios are given to illustrate the effective application of the proposed method and the new frequency response characteristics, including parameter-excited modal resonance, doubling-peak frequency response, and remarkable reduction of the maximum frequency response for certain parametric wave numbers and wave amplitudes. The results have potential applications in structural vibration control.

  10. Naima: a Python package for inference of particle distribution properties from nonthermal spectra

    NASA Astrophysics Data System (ADS)

    Zabalza, V.

    2015-07-01

    The ultimate goal of the observation of nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available to infer the particle distribution properties from the observed photon spectra from X-ray to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow these models to be used to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.
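
    naima's inference is built on the emcee ensemble sampler. The sketch below shows the generic MCMC spectral-fitting pattern with emcee directly, on a hypothetical power-law photon spectrum and synthetic data; it deliberately avoids reproducing naima's own model and sampler API.

        import numpy as np
        import emcee

        # Synthetic spectrum: energies E (arbitrary units), fluxes F with errors.
        rng = np.random.default_rng(3)
        E = np.logspace(-1, 1, 15)
        sigma = 0.1 * 1e-11 * E ** -2.4
        F = 1e-11 * E ** -2.4 + rng.normal(0.0, sigma)

        def log_prob(theta):
            # Log-posterior for F = 10**log_a * E**(-index), with flat priors.
            log_a, index = theta
            if not (-14.0 < log_a < -8.0 and 1.0 < index < 4.0):
                return -np.inf
            model = 10.0 ** log_a * E ** (-index)
            return -0.5 * np.sum(((F - model) / sigma) ** 2)

        nwalkers, ndim = 32, 2
        p0 = np.array([-11.0, 2.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
        sampler.run_mcmc(p0, 2000)
        samples = sampler.get_chain(discard=500, flat=True)
        print(samples.mean(axis=0), samples.std(axis=0))  # posterior summaries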

  11. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Izacard, Olivier

    2016-08-01

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  12. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier, E-mail: izacard@llnl.gov

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  13. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152

  14. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.
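
    The construction can be sketched directly: a base Gumbel density is reshaped by a squared Legendre-polynomial series evaluated on the base CDF (mapped onto [-1, 1]) and then renormalized. This is a generic Legendre-based semi-nonparametric density, a sketch assuming arbitrary illustrative coefficients rather than the paper's exact parametrization.

        import numpy as np
        from numpy.polynomial import legendre
        from scipy import integrate, stats

        def snp_density(coeffs, base=stats.gumbel_r()):
            # Base density reshaped by a squared Legendre series; squaring keeps
            # the result nonnegative, and the integral Z restores normalization.
            def unnorm(x):
                u = 2.0 * base.cdf(x) - 1.0          # map support onto [-1, 1]
                return base.pdf(x) * legendre.legval(u, coeffs) ** 2
            z, _ = integrate.quad(unnorm, -np.inf, np.inf)
            return lambda x: unnorm(x) / z

        # A short series already reshapes the unimodal Gumbel base into a
        # multimodal density, which is the point of the SNP extension.
        f = snp_density([1.0, 0.0, 0.8])
        print(f(np.linspace(-2.0, 6.0, 5)))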

  15. Some rules for polydimensional squeezing

    NASA Technical Reports Server (NTRS)

    Manko, Vladimir I.

    1994-01-01

    The review of the following results is presented: For mixed-state light of an N-mode electromagnetic field described by a Wigner function of generic Gaussian form, the photon distribution function is obtained and expressed explicitly in terms of Hermite polynomials of 2N variables. The moments of this distribution are calculated and expressed as functions of matrix invariants of the dispersion matrix. The role of a new uncertainty relation depending on the photon-state mixing parameter is elucidated. New sum rules for Hermite polynomials of several variables are found. The photon statistics of polymode even and odd coherent light and squeezed polymode Schroedinger cat light are given explicitly. The photon distribution for polymode squeezed number states, expressed in terms of multivariable Hermite polynomials, is discussed.

  16. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
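
    Sobol indices of this kind are straightforward to compute with standard tooling. The sketch below uses the SALib package on a three-parameter toy flux model standing in for the Noah LSM; the parameter names, bounds, and toy response are illustrative assumptions, and the real analysis would run the LSM for each sampled parameter set.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["soil_conductivity", "root_depth", "stomatal_resistance"],
            "bounds": [[0.1, 1.0], [0.2, 2.0], [20.0, 200.0]],
        }

        def toy_flux(p):
            # The multiplicative terms create the parameter interaction that
            # total indices pick up but first-order indices do not.
            return p[:, 0] * p[:, 1] + np.sqrt(p[:, 2]) * (1.0 + 0.5 * p[:, 0])

        X = saltelli.sample(problem, 1024)   # Saltelli cross-sampling design
        Y = toy_flux(X)
        Si = sobol.analyze(problem, Y)
        print(Si["S1"])   # first-order indices: direct variance contributions
        print(Si["ST"])   # total indices: direct plus interaction contributions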

  17. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, it was assumed that physically based distributed hydrological models could derive their parameters directly from terrain properties, so that there was no need to calibrate them; unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted the application of such models in flood forecasting, so parameter optimization may also be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight and an arccosine-function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
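
    The two PSO modifications are easy to state in code. The sketch below implements a linearly decreasing inertia weight and arccosine-based acceleration coefficients in a generic particle swarm loop; the decay constants and toy objective are illustrative assumptions, not the Liuxihe calibration setup.

        import numpy as np

        def pso(objective, bounds, n_particles=20, n_iter=30, seed=0):
            # PSO with a linearly decreasing inertia weight w and acceleration
            # coefficients c1, c2 adjusted through an arccosine schedule.
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.apply_along_axis(objective, 1, x)
            g = pbest[np.argmin(pbest_f)]
            for t in range(n_iter):
                w = 0.9 - (0.9 - 0.4) * t / n_iter           # linear decrease
                c1 = 2.0 * np.arccos(t / n_iter) / np.pi     # decays 1 -> 0
                c2 = 2.0 * np.arccos(-t / n_iter) / np.pi    # grows 1 -> 2
                r1, r2 = rng.random((2,) + x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(objective, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        # Toy objective standing in for a flood-forecast error measure.
        best, err = pso(lambda p: np.sum((p - 0.3) ** 2), [(0.0, 1.0)] * 4)
        print(best, err)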

  18. Geometric multiaxial representation of N -qubit mixed symmetric separable states

    NASA Astrophysics Data System (ADS)

    SP, Suma; Sirsi, Swarnamala; Hegde, Subramanya; Bharath, Karthik

    2017-08-01

    The study of N-qubit mixed symmetric separable states is a longstanding challenging problem as no unique separability criterion exists. In this regard, we take up the N-qubit mixed symmetric separable states for a detailed study as these states are of experimental importance and offer an elegant mathematical analysis since the dimension of the Hilbert space is reduced from 2^N to N+1. Since there exists a one-to-one correspondence between the spin-j system and an N-qubit symmetric state, we employ Fano statistical tensor parameters for the parametrization of the spin-density matrix. Further, we use a geometric multiaxial representation (MAR) of the density matrix to characterize the mixed symmetric separable states. Since the separability problem is NP-hard, we choose to study it in the continuum limit where mixed symmetric separable states are characterized by the P-distribution function λ(θ, ϕ). We show that the N-qubit mixed symmetric separable states can be visualized as a uniaxial system if the distribution function is independent of θ and ϕ. We further choose a distribution function to be the most general positive function on a sphere and observe that the statistical tensor parameters characterizing the N-qubit symmetric system are the expansion coefficients of the distribution function. As an example for the discrete case, we investigate the MAR of a uniformly weighted two-qubit mixed symmetric separable state. We also observe that there exists a correspondence between the separability and classicality of states.

  19. Models of violently relaxed galaxies

    NASA Astrophysics Data System (ADS)

    Merritt, David; Tremaine, Scott; Johnstone, Doug

    1989-02-01

    The properties of spherical self-gravitating models derived from two distribution functions that incorporate, in a crude way, the physics of violent relaxation are investigated. The first distribution function is identical to the one discussed by Stiavelli and Bertin (1985) except for a change in the sign of the 'temperature', i.e., from exp(-aE) to exp(+aE). It is shown that these 'negative temperature' models provide a much better description of the end-state of violent relaxation than 'positive temperature' models. The second distribution function is similar to the first except for a different dependence on angular momentum. Both distribution functions yield single-parameter families of models with surface density profiles very similar to the R^(1/4) law. Furthermore, the central concentration of models in both families increases monotonically with the velocity anisotropy, as expected in systems that formed through cold collapse.

  20. Reconstruction of the domain orientation distribution function of polycrystalline PZT ceramics using vector piezoresponse force microscopy.

    PubMed

    Kratzer, Markus; Lasnik, Michael; Röhrig, Sören; Teichert, Christian; Deluca, Marco

    2018-01-11

    Lead zirconate titanate (PZT) is one of the prominent materials used in polycrystalline piezoelectric devices. Since the ferroelectric domain orientation is the most important parameter affecting the electromechanical performance, analyzing the domain orientation distribution is of great importance for the development and understanding of improved piezoceramic devices. Here, vector piezoresponse force microscopy (vector-PFM) has been applied in order to reconstruct the ferroelectric domain orientation distribution function of polished sections of device-ready polycrystalline PZT material. A measurement procedure and a computer program based on the software Mathematica have been developed to automatically evaluate the vector-PFM data for reconstructing the domain orientation function. The method is tested on differently in-plane and out-of-plane poled PZT samples, and the results reveal the expected domain patterns and allow determination of the polarization orientation distribution function with high accuracy.

  1. Influence of grain boundaries on the distribution of components in binary alloys

    NASA Astrophysics Data System (ADS)

    L'vov, P. E.; Svetukhin, V. V.

    2017-12-01

    Based on the free-energy density functional method (the Cahn-Hilliard equation), a phenomenological model that describes the influence of grain boundaries on the distribution of components in binary alloys has been developed. The model is built on the assumption of a difference between the interaction parameters of solid solution components in the bulk and at the grain boundary. A difference scheme based on the spectral method is proposed to solve the Cahn-Hilliard equation with interaction parameters depending on coordinates. Depending on the ratio between the interaction parameters in the bulk and at the grain boundary, the temperature, and the alloy composition, the model can give rise to different types of distribution of a dissolved component, namely, depletion or enrichment of the grain-boundary area, preferential grain-boundary precipitation, competitive precipitation in the bulk and at the grain boundary, etc.
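
    A minimal sketch of such a spectral scheme is shown below: a semi-implicit Fourier step for the 1D Cahn-Hilliard equation with a coordinate-dependent interaction parameter that is raised in a narrow grain-boundary strip, producing segregation there. All numerical values are illustrative assumptions, not the paper's parameters.

        import numpy as np

        # Semi-implicit Fourier-spectral stepping for
        #   dc/dt = M * d2/dx2 ( c^3 - chi(x)*c - kappa * d2c/dx2 ),
        # with the stiff fourth-order term treated implicitly and the
        # nonlinear term explicitly.
        n, L = 256, 64.0
        x = np.linspace(0.0, L, n, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        k2, k4 = k ** 2, k ** 4
        M, kappa, dt = 1.0, 1.0, 0.05

        chi = np.where(np.abs(x - L / 2) < 2.0, 1.4, 1.0)  # stronger at the GB
        rng = np.random.default_rng(4)
        c = 0.1 * rng.standard_normal(n)                   # near-critical alloy

        for _ in range(2000):
            mu_hat = np.fft.fft(c ** 3 - chi * c)          # explicit part
            c_hat = (np.fft.fft(c) - dt * M * k2 * mu_hat) \
                    / (1.0 + dt * M * kappa * k4)
            c = np.fft.ifft(c_hat).real

        # Compare grain-boundary composition with the catchment-wide average.
        print(c[np.abs(x - L / 2) < 2.0].mean(), c.mean())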

  2. Remarkable features in lattice-parameter ratios of crystals. II. Monoclinic and triclinic crystals.

    PubMed

    de Gelder, R; Janner, A

    2005-06-01

    The frequency distributions of monoclinic crystals as a function of the lattice-parameter ratios resemble the corresponding ones of orthorhombic crystals: an exponential component, with more or less pronounced sharp peaks, in general with the most important peak at the ratio value 1. In addition, the distribution as a function of the monoclinic angle beta has a sharp peak at 90 degrees and decreases appreciably at larger angles. Similar behavior is observed for the three triclinic angular parameters alpha, beta and gamma, with characteristic differences between the organic and metal-organic, bio-macromolecular and inorganic crystals, respectively. The general behavior observed for the hexagonal, tetragonal, orthorhombic, monoclinic and triclinic crystals {in the first part of this series [de Gelder & Janner (2005). Acta Cryst. B61, 287-295] and in the present case} is summarized and discussed. The data involved represent 366 800 crystals, with lattice parameters taken from the Cambridge Structural Database, CSD (294 400 entries), the Protein Data Bank, PDB (18 800 entries), and the Inorganic Crystal Structure Database, ICSD (53 600 entries). A new general structural principle is suggested.

  3. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
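
    The special case mentioned in the last sentence is easy to exhibit. The sketch below is plain histogram equalization, mapping gray levels through the image's own empirical CDF; the paper's method generalizes this by equalizing perceived brightness under the Heinemann model rather than raw gray-level frequency. The test image is synthetic.

        import numpy as np

        def equalize(img, levels=256):
            # Map each gray level through the image's cumulative distribution
            # so the output brightness histogram is approximately flat.
            hist = np.bincount(img.ravel(), minlength=levels)
            cdf = np.cumsum(hist) / img.size            # empirical CDF in [0, 1]
            lut = np.round((levels - 1) * cdf).astype(img.dtype)
            return lut[img]

        img = np.random.default_rng(5).integers(40, 90, (64, 64), dtype=np.uint8)
        out = equalize(img)
        print(img.min(), img.max(), "->", out.min(), out.max())  # range stretched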

  4. Theoretical Calculation of the Electron Transport Parameters and Energy Distribution Function for CF3I with noble gases mixtures using Monte Carlo simulation program

    NASA Astrophysics Data System (ADS)

    Jawad, Enas A.

    2018-05-01

    In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environmentally friendly gas, with the noble gases (argon, helium, krypton, neon and xenon). The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL) and the density-normalized mobility (μN). The impact of CF3I in the noble gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient and the density-normalized mobility. The results of the calculation agree well with the experimental results.

  5. Electronic Structure, Dielectric Response, and Surface Charge Distribution of RGD (1FUV) Peptide

    PubMed Central

    Adhikari, Puja; Wen, Amy M.; French, Roger H.; Parsegian, V. Adrian; Steinmetz, Nicole F.; Podgornik, Rudolf; Ching, Wai-Yim

    2014-01-01

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor. PMID:25001596

  6. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    NASA Astrophysics Data System (ADS)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α-phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.

  7. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  8. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    NASA Astrophysics Data System (ADS)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of the TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution were the most appropriate at most of the stations of the annual maximum streamflow series in Johor, Malaysia.
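
    For reference, the sketch below computes the first four sample L-moments via probability-weighted moments; this is the untrimmed special case (t1 = 0) of the TL-moments used in the study, with trimmed versions reweighting the order statistics analogously. The synthetic flow data are illustrative.

        import numpy as np

        def sample_l_moments(x):
            # Probability-weighted moments b_r from the ordered sample, then the
            # standard linear combinations giving the first four L-moments.
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            j = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((j - 1) / (n - 1) * x) / n
            b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
            b3 = np.sum((j - 1) * (j - 2) * (j - 3)
                        / ((n - 1) * (n - 2) * (n - 3)) * x) / n
            l1, l2 = b0, 2 * b1 - b0
            l3 = 6 * b2 - 6 * b1 + b0
            l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
            return l1, l2, l3 / l2, l4 / l2  # mean, L-scale, L-skew, L-kurtosis

        flows = np.random.default_rng(6).lognormal(5.0, 0.6, 200)
        print(sample_l_moments(flows))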

  9. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
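
    The Monte Carlo pattern can be sketched in a few lines: sample the functional barrier's diffusion coefficient and thickness from assumed input distributions, propagate each draw through a simple mass-transfer quantity, and report percentiles. Here the propagated quantity is the standard diffusion lag time t_lag = L^2/(6D); the distributions and all parameter values are illustrative assumptions, not the tool or values used in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000
        # Uncertain barrier properties: lognormal diffusion coefficient (cm^2/s)
        # and normally distributed layer thickness (cm, 20 um +/- 10%).
        D = rng.lognormal(mean=np.log(1e-13), sigma=1.0, size=n)
        L = rng.normal(20e-4, 2e-4, size=n)

        t_lag_days = L ** 2 / (6.0 * D) / 86400.0  # diffusion lag time

        # Report the percentiles a risk assessment would quote, not one number.
        print(np.percentile(t_lag_days, [5, 50, 95]))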

  10. A computer model of molecular arrangement in a n-paraffinic liquid

    NASA Astrophysics Data System (ADS)

    Vacatello, Michele; Avitabile, Gustavo; Corradini, Paolo; Tuzi, Angela

    1980-07-01

    A computer model of a bulk liquid polymer was built to investigate the problem of local order. The model is made of C30 n-alkane molecules; it is not a lattice model, but it allows for a continuous variability of torsion angles and interchain distances, subject to realistic intra- and intermolecular potentials. Experimental x-ray scattering curves and radial distribution functions are well reproduced. Calculated properties like end-to-end distances, distribution of torsion angles, radial distribution functions, and chain direction correlation parameters, all indicate a random coil conformation and no tendency to form bundles of parallel chains.

  11. Universality of Generalized Parton Distributions in Light-Front Holographic QCD

    NASA Astrophysics Data System (ADS)

    de Téramond, Guy F.; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J.; Deur, Alexandre; Hlfhs Collaboration

    2018-05-01

    The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x → 1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.

  12. Universality of Generalized Parton Distributions in Light-Front Holographic QCD.

    PubMed

    de Téramond, Guy F; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J; Deur, Alexandre

    2018-05-04

    The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x → 1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.

  13. On the issues of probability distribution of GPS carrier phase observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slips detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS

  14. Preisach modeling of temperature-dependent ferroelectric response of piezoceramics at sub-switching regime

    NASA Astrophysics Data System (ADS)

    Ochoa, Diego Alejandro; García, Jose Eduardo

    2016-04-01

    The Preisach model is a classical method for describing nonlinear behavior in hysteretic systems. According to this model, a hysteretic system contains a collection of simple bistable units which are characterized by an internal field and a coercive field. This set of bistable units exhibits a statistical distribution that depends on these fields as parameters. Thus, the nonlinear response depends on the specific distribution function associated with the material. This model is satisfactorily used in this work to describe the temperature-dependent ferroelectric response in PZT- and KNN-based piezoceramics. A distribution function expanded in a Maclaurin series, retaining only the first terms in the internal field and the coercive field, is proposed. Changes in the coefficient relations of a single distribution function allow us to explain the complex temperature dependence of hard piezoceramic behavior. A similar analysis based on the same form of the distribution function shows that the KNL-NTS properties soften around its orthorhombic-to-tetragonal phase transition.
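
    The model is compact enough to sketch directly: the response is a weighted sum over bistable hysterons with up/down switching thresholds, and the weight surface plays the role of the distribution function over internal and coercive fields. A Gaussian weight is used below as a simple stand-in for the paper's Maclaurin-expanded form; all constants are illustrative.

        import numpy as np

        def preisach_loop(h_path, mu, n=64):
            # Weighted sum of bistable hysterons: each switches up at field a
            # and down at field b (a >= b); mu(a, b) is the Preisach
            # distribution function over the internal and coercive fields.
            grid = np.linspace(-1.0, 1.0, n)
            a, b = np.meshgrid(grid, grid, indexing="ij")
            valid = a >= b
            w = np.where(valid, mu(a, b), 0.0)
            state = np.where(valid, -1.0, 0.0)   # negative saturation start
            out = []
            for h in h_path:
                state[(h >= a) & valid] = 1.0
                state[(h <= b) & valid] = -1.0
                out.append(np.sum(w * state) / np.sum(w))
            return np.array(out)

        # Gaussian density in coercive field hc = (a - b)/2 and internal field
        # hu = (a + b)/2 -- a stand-in for the Maclaurin-expanded distribution.
        mu = lambda a, b: np.exp(-((((a - b) / 2 - 0.3) ** 2
                                    + ((a + b) / 2) ** 2) / 0.02))
        h = np.concatenate([np.linspace(-1, 1, 100), np.linspace(1, -1, 100)])
        m = preisach_loop(h, mu)
        print(m[50], m[150])   # ascending vs descending branch near h = 0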

  15. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    PubMed

    Blumberg, Leonid M

    2017-03-31

    If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling where ΔS(T) and ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_Ref) - the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
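
    In its T-centric form, the 3-parameter Clarke-Glew construction amounts to adding a constant heat-capacity term that makes ΔH and ΔS temperature dependent. The sketch below evaluates ln K(T) this way; it is a textbook form of the model under illustrative parameter values, not the paper's K-centric formulation.

        import numpy as np

        R = 8.31446  # J/(mol K)

        def ln_k_clarke_glew(T, T_ref, dS_ref, dH_ref, dCp):
            # Clarke-Glew: dH(T) and dS(T) follow from a constant heat
            # capacity change dCp; then ln K = -dH/(R*T) + dS/R.
            dH = dH_ref + dCp * (T - T_ref)
            dS = dS_ref + dCp * np.log(T / T_ref)
            return -dH / (R * T) + dS / R

        T = np.array([350.0, 400.0, 450.0])
        print(ln_k_clarke_glew(T, T_ref=400.0, dS_ref=-110.0,
                               dH_ref=-45e3, dCp=60.0))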

  16. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics and psychology. Normal variables have a constant value of kurtosis (κ = 3), independently of the values of the two parameters: mean and variance. In fact, the excess kurtosis is defined as κ − 3, and the normal distribution kurtosis is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and its range is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
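
    The quoted range is easy to check numerically. The sketch below estimates the excess kurtosis of Z = XY for correlated standard normals by Monte Carlo; for independent standard normals the excess kurtosis is 6, and for perfect correlation (Z = X²) it is 12, the endpoints the abstract cites. Sample sizes and correlation values are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        for rho in (0.0, 0.5, 1.0):
            cov = [[1.0, rho], [rho, 1.0]]
            xy = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
            z = xy[:, 0] * xy[:, 1]
            # Fisher convention: excess kurtosis, zero for a normal variable.
            print(rho, stats.kurtosis(z))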

  17. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of the FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on the FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  18. Soft context clustering for F0 modeling in HMM-based speech synthesis

    NASA Astrophysics Data System (ADS)

    Khorram, Soheil; Sameti, Hossein; King, Simon

    2015-12-01

    This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.

  19. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  20. Ejected Particle Size Distributions from Shocked Metal Surfaces

    DOE PAGES

    Schauer, M. M.; Buttler, W. T.; Frayer, D. K.; ...

    2017-04-12

    Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also find strong evidence for a bimodal distribution of particle sizes, with smaller particles evolving from features machined into the target surface and larger particles being produced at the edges of these features.
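
    Once particle sizes are in hand, extracting the characteristic log-normal parameters is a one-line fit. The sketch below does this on synthetic diameters with scipy; it stands in for the distribution-extraction step only, not for the paper's inversion of the angular light-scattering data, and all numbers are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        diam_um = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=5000)

        # Fix the location at zero so scale = exp(log-mean) and the shape
        # parameter equals the log-sigma of the underlying normal.
        sigma, loc, scale = stats.lognorm.fit(diam_um, floc=0.0)
        print("median diameter (um):", scale)
        print("log-sigma:", sigma)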

  1. Ejected Particle Size Distributions from Shocked Metal Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauer, M. M.; Buttler, W. T.; Frayer, D. K.

    Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also find strong evidence for a bimodal distribution of particle sizes, with smaller particles evolving from features machined into the target surface and larger particles being produced at the edges of these features.

  2. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
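
    A one-dimensional sketch of the Robbins-Monro iteration toward the fixed point of a contractive regression function, with an invented noisy map; the step sizes a_n = 1/n satisfy the usual conditions (the sum of a_n diverges while the sum of a_n squared converges).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_f(x):
        """Noisy observation of a contractive map f(x) = 0.5*x + 1
        (true fixed point x* = 2); the noise level is illustrative."""
        return 0.5 * x + 1.0 + 0.1 * rng.standard_normal()

    x = 0.0
    for n in range(1, 5001):
        a_n = 1.0 / n                    # Robbins-Monro step-size schedule
        x = x + a_n * (noisy_f(x) - x)   # move toward the observed fixed point
    print(x)  # converges to ~2 in mean square and with probability one
    ```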

  3. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  4. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ≈20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
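
    The essence of the method is to minimize the Poisson negative log-likelihood instead of χ²; below is a sketch with a hypothetical Gaussian-plus-background fit function and synthetic low-count data, not the authors' diagnostic code.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    # Hypothetical low-count data drawn from a Gaussian-plus-background model.
    x = np.linspace(-3, 3, 25)
    model = lambda p, x: p[0] * np.exp(-0.5 * ((x - p[1]) / p[2]) ** 2) + p[3]
    rng = np.random.default_rng(2)
    counts = rng.poisson(model([10.0, 0.0, 1.0, 1.0], x))

    def neg_log_like(p):
        """Poisson negative log-likelihood; correct where a chi-square
        (Gaussian) treatment of the uncertainties breaks down."""
        mu = model(p, x)
        if np.any(mu <= 0):
            return np.inf
        return np.sum(mu - counts * np.log(mu) + gammaln(counts + 1.0))

    fit = minimize(neg_log_like, x0=[8.0, 0.1, 1.2, 0.5], method="Nelder-Mead")
    print(fit.x)
    ```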

  5. Power-law distributions for a trapped ion interacting with a classical buffer gas.

    PubMed

    DeVoe, Ralph G

    2009-02-13

    Classical collisions with an ideal gas generate non-Maxwellian distribution functions for a single ion in a radio frequency ion trap. The distributions have power-law tails whose exponent depends on the ratio of buffer gas to ion mass. This provides a statistical explanation for the previously observed transition from cooling to heating. Monte Carlo results approximate a Tsallis distribution over a wide range of parameters and have ab initio agreement with experiment.

  6. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

    We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover a critical behavior for different time scales Δt. Such cumulative distributions, in accordance with a variety of complex - and financial - systems, can be modeled by the cumulative distribution functions of q-Gaussians, the distribution function that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.
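
    For reference, a q-Gaussian can be evaluated directly from the Tsallis q-exponential; the grid normalization below is a numerical convenience, and the parameter values are illustrative rather than fitted to the returns data.

    ```python
    import numpy as np

    def q_exponential(x, q):
        """Tsallis q-exponential e_q(x) = [1 + (1 - q) x]_+^(1/(1-q));
        q -> 1 recovers exp(x)."""
        if abs(q - 1.0) < 1e-12:
            return np.exp(x)
        base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
        return base ** (1.0 / (1.0 - q))

    def q_gaussian_pdf(x, q, beta):
        """q-Gaussian density, normalized numerically on the given grid."""
        shape = q_exponential(-beta * x ** 2, q)
        return shape / np.trapz(shape, x)

    x = np.linspace(-10, 10, 4001)
    pdf = q_gaussian_pdf(x, q=1.4, beta=1.0)   # fat-tailed for q > 1
    print(np.trapz(pdf, x))                    # -> 1.0
    ```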

  7. Peculiarities of the momentum distribution functions of strongly correlated charged fermions

    NASA Astrophysics Data System (ADS)

    Larkin, A. S.; Filinov, V. S.; Fortov, V. E.

    2018-01-01

    A new numerical version of the Wigner approach to the quantum thermodynamics of strongly coupled systems of particles has been developed for extreme conditions, where analytical approximations based on different kinds of perturbation theory cannot be applied. An explicit analytical expression for the Wigner function has been obtained in linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential that depends on the coordinates, momenta, and degeneracy parameter of the particles and takes into account Pauli blocking of fermions. A new quantum Monte Carlo method for calculating average values of arbitrary quantum operators has been developed. Calculations of the momentum distribution functions and the pair correlation functions of a degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell-Boltzmann and Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high-energy quantum 'tails'.

  8. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.

  9. Vibrational study and Natural Bond Orbital analysis of serotonin in monomer and dimer states by density functional theory

    NASA Astrophysics Data System (ADS)

    Borah, Mukunda Madhab; Devi, Th. Gomti

    2018-06-01

    The vibrational spectral analysis of serotonin and its dimer was carried out using Fourier transform infrared (FTIR) and Raman techniques. The equilibrium geometrical parameters, harmonic vibrational wavenumbers, frontier orbitals, Mulliken atomic charges, natural bond orbitals, first-order hyperpolarizability, and some optimized energy parameters were computed by density functional theory with the 6-31G(d,p) basis set. The detailed analysis of the vibrational spectra has been carried out by computing the potential energy distribution (PED, %) with the help of the Vibrational Energy Distribution Analysis (VEDA) program. The second-order delocalization energies E(2) confirm the occurrence of intramolecular charge transfer (ICT) within the molecule. The computed wavenumbers of the serotonin monomer and dimer were found to be in good agreement with the experimental Raman and IR values.

  10. Quantal diffusion description of multinucleon transfers in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Ayik, S.; Yilmaz, B.; Yilmaz, O.; Umar, A. S.

    2018-05-01

    Employing the stochastic mean-field (SMF) approach, we develop a quantal diffusion description of the multinucleon transfer in heavy-ion collisions at finite impact parameters. The quantal transport coefficients are determined by the occupied single-particle wave functions of the time-dependent Hartree-Fock equations. As a result, the primary fragment mass and charge distribution functions are determined entirely in terms of the mean-field properties. This powerful description does not involve any adjustable parameter, includes the effects of shell structure, and is consistent with the fluctuation-dissipation theorem of nonequilibrium statistical mechanics. As a first application of the approach, we analyze the fragment mass distribution in ⁴⁸Ca + ²³⁸U collisions at the center-of-mass energy E_c.m. = 193 MeV and compare the calculations with the experimental data.

  11. Analysis of current distribution in a large superconductor

    NASA Astrophysics Data System (ADS)

    Hamajima, Takataro; Alamgir, A. K. M.; Harada, Naoyuki; Tsuda, Makoto; Ono, Michitaka; Takano, Hirohisa

    An imbalanced current distribution, which is often observed in cable-in-conduit (CIC) superconductors composed of multistaged, triplet-type sub-cables, can degrade the performance of the coils. It is hence very important to analyze the current distribution in a superconductor and to find methods to realize a homogeneous current distribution in the conductor. We apply magnetic flux conservation in a loop contoured by the electric center lines of filaments in two arbitrary strands located on adjacent layers in a coaxial multilayer superconductor, and thereby analyze the current distribution in the conductor. A generalized formula governing the current distribution can be written as an explicit function of the superconductor construction parameters, such as the twist pitch, twist direction, and radius of each layer. We numerically analyze a homogeneous current distribution as a function of the twist pitches of the layers, using the fundamental formula. Moreover, it is demonstrated that the current distribution in the coaxial superconductor can be controlled.

  12. Calibration and data collection protocols for reliable lattice parameter values in electron pair distribution function studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun

    2015-01-30

    Different protocols for calibrating electron pair distribution function (ePDF) measurements are explored and described for quantitative studies on nanomaterials. It is found that the most accurate approach to determine the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.

  13. Calibration and data collection protocols for reliable lattice parameter values in electron pair distribution function (ePDF) studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun

    2015-02-01

    We explore and describe different protocols for calibrating electron pair distribution function (ePDF) measurements for quantitative studies on nanomaterials. We find that the most accurate approach to determining the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.

  14. Current sheet in plasma as a system with a controlling parameter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fridman, Yu. A., E-mail: yulya-fridman@yandex.ru; Chukbar, K. V., E-mail: Chukbar-KV@nrcki.ru

    2015-08-15

    A simple kinetic model describing stationary solutions with bifurcated and single-peaked current density profiles of a plane electron beam or current sheet in plasma is presented. A connection is established between the two-dimensional constructions arising in terms of the model and the one-dimensional considerations of Bernstein-Greene-Kruskal, facilitating the reconstruction of the distribution function of trapped particles when both the profile of the electric potential and the free-particle distribution function are known.

  15. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    The earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull, and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings, so it is worth searching for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale, and shape) exponentiated exponential distribution and investigate its scope as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively used in earthquake recurrence studies.
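
    A sketch of the three-parameter generalized (exponentiated) exponential family used here: closed-form CDF, PDF, and hazard, plus the conditional probability of an event within the next dt years given an elapsed time. All numerical values are illustrative, not the paper's estimates.

    ```python
    import numpy as np

    def ge_cdf(t, alpha, scale, loc=0.0):
        """Generalized exponential CDF, F(t) = (1 - exp(-(t - loc)/scale))**alpha
        for t > loc (alpha is the shape parameter)."""
        z = np.clip((t - loc) / scale, 0.0, None)
        return (1.0 - np.exp(-z)) ** alpha

    def ge_pdf(t, alpha, scale, loc=0.0):
        z = np.clip((t - loc) / scale, 0.0, None)
        return (alpha / scale) * np.exp(-z) * (1.0 - np.exp(-z)) ** (alpha - 1.0)

    def ge_hazard(t, alpha, scale, loc=0.0):
        """Hazard h = f/(1 - F); unlike the gamma case, it is available in
        closed form even for non-integer shape."""
        return ge_pdf(t, alpha, scale, loc) / (1.0 - ge_cdf(t, alpha, scale, loc))

    def conditional_prob(elapsed, dt, alpha, scale, loc=0.0):
        """P(event within dt | no event during `elapsed`)."""
        F = lambda t: ge_cdf(t, alpha, scale, loc)
        return (F(elapsed + dt) - F(elapsed)) / (1.0 - F(elapsed))

    print(conditional_prob(17.0, 10.0, alpha=1.8, scale=9.0))  # illustrative values
    ```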

  16. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to the outside-bark diameter at breast height and the total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
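
    The copula step can be sketched as follows: a bivariate Gaussian copula density ties together two already-fitted marginals (left abstract here); the correlation value is a placeholder, not an estimate from the Lithuanian data.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def gaussian_copula_density(u, v, rho):
        """Bivariate Gaussian copula density c(u, v) on uniform margins.
        With fitted marginals F_d, F_h and densities f_d, f_h, the joint
        model is f(d, h) = c(F_d(d), F_h(h)) * f_d(d) * f_h(h)."""
        x, y = norm.ppf(u), norm.ppf(v)
        cov = [[1.0, rho], [rho, 1.0]]
        joint = multivariate_normal([0.0, 0.0], cov).pdf(np.column_stack([x, y]))
        return joint / (norm.pdf(x) * norm.pdf(y))

    print(gaussian_copula_density(np.array([0.3]), np.array([0.6]), rho=0.8))
    ```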

  17. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.

  18. Topics in the Sequential Design of Experiments

    DTIC Science & Technology

    1992-03-01

    [Report documentation page fragments. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory. Recoverable citations: "...distributions for one parameter exponential families," by Michael Woodroofe, Statistica Sinica, 2 (1991), 91-112; [6] "A non linear renewal theory for a functional of..."]

  19. Analysis of scattering statistics and governing distribution functions in optical coherence tomography.

    PubMed

    Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-07-01

    The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
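
    A sketch of the K-distribution intensity PDF, written as a gamma-mixed exponential, with a moment-based estimate of the shape parameter from the intensity contrast; the synthetic intensities below stand in for measured OCT data, and the mixing construction is the usual K-distribution convention rather than the authors' fitting code.

    ```python
    import numpy as np
    from scipy.special import kv, gamma

    def k_dist_pdf(I, alpha, mean_I):
        """K-distribution intensity PDF (gamma-mixed exponential); small
        `alpha` means few effective scatterers, and alpha -> inf recovers
        the exponential (fully developed speckle) intensity PDF."""
        theta = mean_I / alpha                  # scale of the gamma mixing density
        z = 2.0 * np.sqrt(I / theta)
        return (2.0 * I ** ((alpha - 1.0) / 2.0)
                / (gamma(alpha) * theta ** ((alpha + 1.0) / 2.0))) * kv(alpha - 1.0, z)

    def shape_from_moments(I):
        """Estimate alpha from the contrast <I^2>/<I>^2 = 2 * (1 + 1/alpha)."""
        contrast = np.mean(I ** 2) / np.mean(I) ** 2
        return 1.0 / (contrast / 2.0 - 1.0)

    # Synthetic check: gamma-modulated exponential intensities with alpha = 3.
    rng = np.random.default_rng(9)
    s = rng.gamma(shape=3.0, scale=1.0 / 3.0, size=200000)
    I = rng.exponential(s)
    print(shape_from_moments(I))   # -> close to 3
    ```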

  20. A family of models for spherical stellar systems

    NASA Technical Reports Server (NTRS)

    Tremaine, Scott; Richstone, Douglas O.; Byun, Yong-Ik; Dressler, Alan; Faber, S. M.; Grillmair, Carl; Kormendy, John; Lauer, Tod R.

    1994-01-01

    We describe a one-parameter family of models of stable spherical stellar systems in which the phase-space distribution function depends only on energy. The models have similar density profiles in their outer parts (ρ ∝ r⁻⁴) and central power-law density cusps, ρ ∝ r^(η−3), 0 < η ≤ 3. The family contains the Jaffe (1983) and Hernquist (1990) models as special cases. We evaluate the surface brightness profile, the line-of-sight velocity dispersion profile, and the distribution function, and discuss analogs of King's core-fitting formula for determining the mass-to-light ratio. We also generalize the models to a two-parameter family, in which the galaxy contains a central black hole; the second parameter is the mass of the black hole. Our models can be used to estimate the detectability of central black holes and the velocity-dispersion profiles of galaxies that contain central cusps, with or without a central black hole.

  1. Density functional theory and molecular dynamics study of the uranyl ion (UO₂)²⁺.

    PubMed

    Rodríguez-Jeangros, Nicolás; Seminario, Jorge M

    2014-03-01

    The detection of uranium is very important, especially in water and, more importantly, in the form of the uranyl ion (UO₂)²⁺, which is one of its most abundant moieties. Here, we report analyses and simulations of uranyl in water using ab initio modified force fields for water with improved parameters and charges for uranyl. We use a TIP4P model, which allows us to obtain accurate water properties such as the boiling point and the second and third shells of water molecules in the radial distribution function, thanks to a fictitious charge that corrects the 3-point models by reproducing the exact dipole moment of the water molecule. We also introduced non-bonded interaction parameters for the water-uranyl intermolecular force field. Special care was taken in testing the effect of a range of uranyl charges on the structure of uranyl-water complexes. Atomic charges of the solvated ion in water were obtained using density functional theory (DFT) calculations, taking into account the presence of nitrate ions in the solution, forming a neutral ensemble. DFT-based force fields were calculated in such a way that water properties, such as the boiling point and the pair distribution function, are preserved. Finally, molecular dynamics simulations of a water box containing uranyl cations and nitrate anions were performed at room temperature. The three peaks in the oxygen-oxygen radial distribution function of water were found to be preserved in the presence of uranyl, thanks to the improved interaction parameters and charges. Also, we found three shells of water molecules surrounding the uranyl ion, instead of two as was previously thought.

  2. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian (chi-squared) functional form for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model based on the HOD, and a distance metric based on the galaxy number density, the two-point correlation function, and the galaxy group multiplicity function, we constrain the HOD parameters of mock observations generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints obtained using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
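
    The logic of ABC can be condensed into a plain rejection sampler; the paper's population Monte Carlo refinement and HOD forward model are replaced here by a deliberately trivial Gaussian toy problem, so everything below is illustrative.

    ```python
    import numpy as np

    def abc_rejection(observed_summary, prior_sampler, simulator, distance,
                      n_draws=10000, eps=0.1):
        """Minimal ABC rejection sampler: keep parameter draws whose simulated
        summary statistics fall within `eps` of the observed ones."""
        kept = []
        for _ in range(n_draws):
            theta = prior_sampler()
            if distance(simulator(theta), observed_summary) < eps:
                kept.append(theta)
        return np.array(kept)

    # Toy example: infer the mean of a Gaussian from its sample mean.
    rng = np.random.default_rng(3)
    posterior = abc_rejection(
        observed_summary=1.3,
        prior_sampler=lambda: rng.uniform(-5, 5),
        simulator=lambda m: rng.normal(m, 1.0, size=100).mean(),
        distance=lambda a, b: abs(a - b),
    )
    print(posterior.mean(), posterior.std())
    ```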

  3. Numerical study of the shape parameter dependence of the local radial point interpolation method in linear elasticity.

    PubMed

    Moussaoui, Ahmed; Bouziane, Touria

    2016-01-01

    The local radial point interpolation method (LRPIM) is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. It overcomes the singularity associated with polynomial bases by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using LRPIM. Our numerical investigation concerns the influence of different shape parameters on the domain of convergence and accuracy, using the thin plate spline radial basis function. We also compare numerical results for different materials and the convergence domain, specifying maximum and minimum values as a function of the number of distributed nodes. The analytical solution for the deflection confirms the numerical results. The essential points of the method are: • LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate. • The convergence of the LRPIM method depends on a number of parameters derived from the local weak form and the sub-domains. • The effect of the number of distributed nodes is studied by varying the material and the radial basis function (TPS).
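
    A bare-bones radial point interpolation with the thin plate spline kernel φ(r) = r² log r, on synthetic nodes and field values; the polynomial augmentation that RPIM-type implementations usually add (and that guarantees solvability) is omitted here for brevity.

    ```python
    import numpy as np

    def tps(r):
        """Thin plate spline radial basis function, phi(r) = r^2 * log(r)."""
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    # Interpolate scattered 2D data as u(x) = sum_j c_j * phi(|x - x_j|).
    rng = np.random.default_rng(4)
    nodes = rng.uniform(0, 1, size=(40, 2))
    u = np.sin(2 * np.pi * nodes[:, 0]) * nodes[:, 1]          # sampled field
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    coeffs = np.linalg.solve(tps(r), u)                        # interpolation system

    x_eval = np.array([[0.5, 0.5]])
    r_eval = np.linalg.norm(x_eval[:, None, :] - nodes[None, :, :], axis=-1)
    print(tps(r_eval) @ coeffs)
    ```

    One attraction of the TPS kernel in this setting is that, unlike multiquadric or Gaussian kernels, its classic form carries no tunable shape parameter, which is part of why its parameter sensitivity is studied separately in the paper.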

  4. Crop water production functions for grain sorghum and winter wheat

    USDA-ARS?s Scientific Manuscript database

    Productivity of water-limited cropping systems can be reduced by untimely distribution of water as well as cold and heat stress. The objective was to develop relationships among weather parameters, water use, and grain productivity to produce functions forecasting grain yields of grain sorghum and w...

  6. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE PAGES

    Radyushkin, Anatoly V.

    2017-08-28

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p₃ ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z₃²) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z₃²), functions of the Ioffe time ν = p₃z₃ and the distance parameter z₃², with respect to which they display perturbative evolution for small z₃. In this form, one may divide out the z₃² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of PDFs.

  7. An Empirical Mass Function Distribution

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L⋆ galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10¹⁰–10¹³ h⁻¹ M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation and can be directly related to parameters of the galaxy distribution, such as L⋆. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain an optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.

  8. Unsaturated flow characterization utilizing water content data collected within the capillary fringe

    USGS Publications Warehouse

    Baehr, Arthur; Reilly, Timothy J.

    2014-01-01

    An analysis is presented to determine unsaturated zone hydraulic parameters based on detailed water content profiles, which can be readily acquired during hydrological investigations. Core samples taken through the unsaturated zone allow for the acquisition of gravimetrically determined water content data as a function of elevation at 3 inch intervals. This dense spacing of data provides several measurements of the water content within the capillary fringe, which are utilized to determine capillary pressure function parameters via least-squares calibration. The water content data collected above the capillary fringe are used to calculate dimensionless flow as a function of elevation providing a snapshot characterization of flow through the unsaturated zone. The water content at a flow stagnation point provides an in situ estimate of specific yield. In situ determinations of capillary pressure function parameters utilizing this method, together with particle-size distributions, can provide a valuable supplement to data libraries of unsaturated zone hydraulic parameters. The method is illustrated using data collected from plots within an agricultural research facility in Wisconsin.

  9. Wormholes and the cosmological constant problem.

    NASA Astrophysics Data System (ADS)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  10. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities at Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis, and r-th-order moments; applied two methods for estimating their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
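
    The discretization underlying such models is simple to state: difference the continuous survival function at the integers. A sketch for the discrete Lomax case, with illustrative parameter values:

    ```python
    import numpy as np

    def discrete_lomax_pmf(k, alpha, sigma):
        """PMF of the discretized Lomax (Pareto II) distribution, obtained by
        differencing the continuous survival function S(x) = (1 + x/sigma)^(-alpha)
        at the integers: P(K = k) = S(k) - S(k + 1)."""
        S = lambda x: (1.0 + x / sigma) ** (-alpha)
        return S(k) - S(k + 1.0)

    k = np.arange(0, 50)
    pmf = discrete_lomax_pmf(k, alpha=1.5, sigma=4.0)
    print(pmf.sum())   # -> close to 1 (tail mass beyond k = 49 is missing)
    ```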

  11. Quantitative phase imaging of platelet: assessment of cell morphology and function

    NASA Astrophysics Data System (ADS)

    Vasilenko, Irina; Vlasova, Elizaveta; Metelin, Vladislav; Agadzhanjan, B.; Lyfenko, R.

    2017-02-01

    It is well known that platelets play a central role in hemostasis and thrombosis; they also mediate tumor cell growth, dissemination, and angiogenesis. The purpose of the present experiment was to evaluate living platelet size, function, and morphology simultaneously in unactivated and activated states using the Phase-Interference Microscope "Cytoscan" (Moscow, Russia). We enrolled 30 healthy volunteers, who had no history of arteriosclerosis-related disorders such as coronary heart disease, cerebrovascular disease, hypertension, diabetes, or hyperlipidemia, and 30 patients with oropharynx cancer. We observed the optic-geometrical parameters of each isolated living cell, and the distribution of platelets by size was analysed to detect the dynamics of cell population heterogeneity. Simultaneously, we identified four platelet forms that have different morphological features and different size distribution parameters. We noticed that the morphological platelet types correlate with the morphometric platelet parameters. The data on polymorphisms of platelet reactivity in tumor progression can be used to improve patient outcomes in cancer prevention and treatment. Moreover, morphometric and functional platelet parameters can serve as criteria of the efficiency of the radio- and chemotherapy carried out. In conclusion, the computer phase-interference microscope provides rapid and effective analysis of living platelet morphology and function at the same time. It could be an easy and fast method to check the state of platelets in patients with changed platelet activation and to follow a possible pharmacological therapy to reduce this phenomenon.

  12. Hybrid method for determining the parameters of condenser microphones from measured membrane velocities and numerical calculations.

    PubMed

    Barrera-Figueroa, Salvador; Rasmussen, Knud; Jacobsen, Finn

    2009-10-01

    Typically, numerical calculations of the pressure, free-field, and random-incidence response of a condenser microphone are carried out on the basis of an assumed displacement distribution of the diaphragm of the microphone; the conventional assumption is that the displacement follows a Bessel function. This assumption is probably valid at frequencies below the resonance frequency. However, at higher frequencies the movement of the membrane is heavily coupled with the damping of the air film between membrane and backplate and with resonances in the back chamber of the microphone. A solution to this problem is to measure the velocity distribution of the membrane by means of a non-contact method, such as laser vibrometry. The measured velocity distribution can be used together with a numerical formulation such as the boundary element method for estimating the microphone response and other parameters, e.g., the acoustic center. In this work, such a hybrid method is presented and examined. The velocity distributions of a number of condenser microphones have been determined using a laser vibrometer, and these measured velocity distributions have been used for estimating microphone responses and other parameters. The agreement with experimental data is generally good. The method can be used as an alternative for validating the parameters of the microphones determined by classical calibration techniques.

  13. A new approach to interpretation of heterogeneity of fluorescence decay in complex biological systems

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Jakub; Kierdaszuk, Borys

    2005-08-01

    Decays of tyrosine fluorescence in protein-ligand complexes are described by a model of a continuous distribution of fluorescence lifetimes. The resulting analytical power-like decay function provides good fits to highly complex fluorescence kinetics. Moreover, it is a manifestation of the so-called Tsallis q-exponential function, which is suitable for describing systems with long-range interactions and memory effects, as well as fluctuations of the characteristic fluorescence lifetime. The proposed decay functions were applied to the analysis of fluorescence decays of tyrosine in a protein, the enzyme purine nucleoside phosphorylase from E. coli (the product of the deoD gene), free in aqueous solution and in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate). The power-like function provides new information about enzyme-ligand complex formation based on a physically justified heterogeneity parameter directly related to the lifetime distribution. A measure of the heterogeneity parameter in the enzyme systems is provided by the variance of the fluorescence lifetime distribution. The possible number of deactivation channels and the excited-state mean lifetime can be easily derived without a priori knowledge of the complexity of the studied system. Moreover, the proposed model is simpler than the traditional multi-exponential one and better describes the heterogeneous nature of the studied systems.

  14. Constraints on the near-Earth asteroid obliquity distribution from the Yarkovsky effect

    NASA Astrophysics Data System (ADS)

    Tardioli, C.; Farnocchia, D.; Rozitis, B.; Cotto-Figueroa, D.; Chesley, S. R.; Statler, T. S.; Vasile, M.

    2017-12-01

    Aims: From light curve and radar data we know the spin axis of only 43 near-Earth asteroids. In this paper we attempt to constrain the spin axis obliquity distribution of near-Earth asteroids by leveraging the Yarkovsky effect and its dependence on an asteroid's obliquity. Methods: By modeling the physical parameters driving the Yarkovsky effect, we solve an inverse problem where we test different simple parametric obliquity distributions. Each distribution results in a predicted Yarkovsky effect distribution that we compare with a χ2 test to a dataset of 125 Yarkovsky estimates. Results: We find different obliquity distributions that are statistically satisfactory. In particular, among the considered models, the best-fit solution is a quadratic function, which only depends on two parameters, favors extreme obliquities consistent with the expected outcomes from the YORP effect, has a 2:1 ratio between retrograde and direct rotators, which is in agreement with theoretical predictions, and is statistically consistent with the distribution of known spin axes of near-Earth asteroids.

  15. A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.

    PubMed

    Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René

    2018-05-01

    Even in normally cycling women, hormone level shapes may vary widely between cycles and between women. Over decades, finding ways to characterize and compare cycle hormone waves has been difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept to characterize most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best-fitting density distribution. The highly flexible beta-binomial distribution offered the best fit of most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with a low location parameter and a low scale parameter; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed-effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events. Schattauer GmbH.
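
    A sketch of how a beta-binomial PMF can trace a hormone wave across an n-day cycle; the mapping from the paper's location/scale parameters to the (a, b) shape parameters below is an assumption made for illustration, not the authors' parameterization.

    ```python
    import numpy as np
    from scipy.stats import betabinom

    # A hormone "wave" over a 28-day cycle sketched as a beta-binomial PMF;
    # loc_par is the assumed peak position (fraction of the cycle) and
    # scale_par the assumed spread.
    n = 28
    loc_par, scale_par = 0.4, 0.15
    a = loc_par / scale_par
    b = (1.0 - loc_par) / scale_par
    days = np.arange(n + 1)
    wave = betabinom.pmf(days, n, a, b)     # bell-shaped for these values
    print(days[np.argmax(wave)])            # day of the peak
    ```

    Larger scale values flatten the wave into a plateau, while extreme location values push the mass toward the cycle ends, reproducing the I-, J-, and U-shapes mentioned above.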

  16. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
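
    The step from Metropolis to Metropolis-Hastings is the proposal-density correction in the acceptance ratio; the sketch below shows that step on a toy one-dimensional target with a deliberately asymmetric proposal (the scene/template machinery of the paper is not modeled here).

    ```python
    import numpy as np

    def mh_step(theta, log_post, propose, log_q, rng):
        """One general Metropolis-Hastings step: propose from q(.|theta) and
        accept with probability min(1, pi(t')q(t|t') / (pi(t)q(t'|t)))."""
        theta_new = propose(theta)
        log_alpha = (log_post(theta_new) - log_post(theta)
                     + log_q(theta, theta_new) - log_q(theta_new, theta))
        return theta_new if np.log(rng.random()) < log_alpha else theta

    # Toy target: standard normal, with a drifted (non-symmetric) proposal.
    rng = np.random.default_rng(5)
    log_post = lambda t: -0.5 * t * t
    propose = lambda t: t + 0.5 + rng.normal(0.0, 1.0)
    log_q = lambda to, frm: -0.5 * (to - frm - 0.5) ** 2   # density of the proposal
    theta, chain = 0.0, []
    for _ in range(5000):
        theta = mh_step(theta, log_post, propose, log_q, rng)
        chain.append(theta)
    print(np.mean(chain), np.var(chain))   # approx 0 and 1
    ```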

  17. Effect of dust size distribution on ion-acoustic solitons in dusty plasmas with different dust grains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Dong-Ning; Yang, Yang; Yan, Qiang

    Theoretical studies are carried out for ion-acoustic solitons in a multicomponent nonuniform plasma, taking the dust size distribution into account. The Korteweg-de Vries equation for ion-acoustic solitons is derived using the reductive perturbation technique. Two special dust size distributions are considered. The dependences of the width and amplitude of the solitons on the dust size parameters are shown. It is found that the properties of a solitary wave depend on the shape of the size distribution function of the dust grains.

  18. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk of disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  19. Morphological study on the prediction of the site of surface slides

    Treesearch

    Hiromasa Hiura

    1991-01-01

    The annual, continual occurrence of surface slides in the basin was estimated by modifying the estimation formula of Yoshimatsu. The Weibull distribution function proved useful for representing the state and the transition of surface slides in the basin. The three parameters of the Weibull function are found to be linear functions of the area ratio a/A. The...

  20. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by...distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic...Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  1. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for the modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T₀ and T₁, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
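
    A Gillespie simulation of the exponential-dwell-time special case (the telegraph model) makes the setup concrete; the generalization to arbitrary dwell-time distributions studied in the paper is not reproduced here, and all rate values are illustrative.

    ```python
    import numpy as np

    def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, rng):
        """Gillespie simulation of the two-state (telegraph) gene model with
        exponential dwell times; returns the mRNA copy number at t_end."""
        t, active, m = 0.0, False, 0
        while t < t_end:
            rates = np.array([k_on if not active else 0.0,   # switch on
                              k_off if active else 0.0,      # switch off
                              k_tx if active else 0.0,       # transcription
                              k_deg * m])                    # degradation
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=rates / total)
            if event == 0:
                active = True
            elif event == 1:
                active = False
            elif event == 2:
                m += 1
            else:
                m -= 1
        return m

    rng = np.random.default_rng(6)
    samples = [telegraph_ssa(0.5, 0.5, 20.0, 1.0, 50.0, rng) for _ in range(500)]
    print(np.mean(samples))   # steady-state mean ~ (k_tx/k_deg) * k_on/(k_on + k_off)
    ```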

  2. Collins-Soper equation for the energy evolution of transverse-momentum and spin dependent parton distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idilbi, Ahmad; Ji Xiangdong; Yuan Feng

    The hadron-energy evolution (Collins-Soper) equation for all leading-twist transverse-momentum- and spin-dependent parton distributions is derived in impact-parameter space. Based on this equation, we present resummation formulas for the spin-dependent structure functions of semi-inclusive deep-inelastic scattering.

  3. Logic Gates Made of N-Channel JFETs and Epitaxial Resistors

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.

    2008-01-01

    Prototype logic gates made of n-channel junction field-effect transistors (JFETs) and epitaxial resistors have been demonstrated, with a view toward eventual implementation of digital logic devices and systems in silicon carbide (SiC) integrated circuits (ICs). This development is intended to exploit the inherent ability of SiC electronic devices to function at temperatures from 300 to somewhat above 500 C and to withstand large doses of ionizing radiation. SiC-based digital logic devices and systems could enable operation of sensors and robots in nuclear reactors, in jet engines, near hydrothermal vents, and in other environments that are so hot or radioactive as to cause conventional silicon electronic devices to fail. At present, the need for digital processing at high temperatures exceeds SiC integrated circuit production capabilities, which do not allow for highly integrated circuits; only production of single or small numbers of depletion-mode n-channel JFETs and epitaxial resistors on a single substrate is possible. As a consequence, fine matching of components is impossible, resulting in rather large direct-current parameter distributions within a group of transistors, typically spanning multiples of 5 to 10. Add to this the lack of p-channel devices to complement the n-channel FETs, the lack of precise dropping diodes, and the lack of enhancement-mode devices at these elevated temperatures, and the use of conventional direct-coupled and buffered direct-coupled logic gate design techniques becomes impossible. The presented logic gate design is tolerant of device parameter distributions and is not hampered by the lack of complementary devices or dropping diodes. In addition to n-channel JFETs, these gates include level-shifting and load resistors (see figure). Instead of relying on precise matching of parameters among individual JFETs, these designs rely on choosing the values of these resistors and of the supply potentials so as to make the circuits perform the desired functions throughout the ranges over which the parameters of the JFETs are distributed. The supply rails V_dd and V_ss and the resistors R are chosen as functions of the distribution of the direct-current operating parameters of the group of transistors used.

  4. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay space cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and the respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominant primitive variables for that response.
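
    A minimal Monte Carlo analogue of such an analysis, with an invented cantilever-like response and invented primitive-variable distributions (lognormal rigidity, Weibull load, normal length) standing in for the NESSUS truss model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100000

    # Hypothetical primitive variables.
    EI = rng.lognormal(mean=np.log(2.0e7), sigma=0.05, size=n)  # flexural rigidity
    P = 1.0e4 * rng.weibull(a=4.0, size=n)                      # tip load
    L = rng.normal(3.0, 0.01, size=n)                           # length

    # Toy surrogate response: tip deflection of an end-loaded cantilever.
    tip_deflection = P * L ** 3 / (3.0 * EI)

    # Empirical CDF quantile of the response and a crude sensitivity measure.
    print("95th-percentile deflection:", np.quantile(tip_deflection, 0.95))
    for name, v in [("EI", EI), ("P", P), ("L", L)]:
        print(name, np.corrcoef(v, tip_deflection)[0, 1])
    ```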

  5. Collisional evolution - an analytical study for the non steady-state mass distribution.

    NASA Astrophysics Data System (ADS)

    Vieira Martins, R.

    1999-05-01

    To study the collisional evolution of asteroidal groups, one can use an analytical solution for self-similar collision cascades. This solution is suitable for studying the steady-state mass distribution of collisional fragmentation. However, away from steady-state conditions, this solution is not satisfactory for some values of the collisional parameters. In fact, for some values of the exponent of the power-law mass distribution of an asteroidal group and of its relation to the exponent of the function that describes "how rocks break", one arrives at singular points of the equation that describes the collisional evolution. These singularities appear because some approximations are usually made in the laborious evaluation of the many integrals that arise in the analytical calculations; they concern the cutoffs for the smallest and largest bodies. These singularities set restrictions on the study of the analytical solution of the collisional equation. To overcome them, the author performed an algebraic computation considering the smallest and largest bodies and obtained analytical expressions for the integrals that describe the collisional evolution without restrictions on the parameters. However, the new distribution is more sensitive to the values of the collisional parameters. In particular, the steady-state solution for the differential mass distribution has exponents slightly different from 11/6 for the usual parameters in the asteroid belt. The sensitivity of this distribution to the parameters is analyzed for the usual values in asteroidal groups. With an expression for the mass distribution free of singularities, its time evolution can also be evaluated. The author arrives at an analytical expression given by a power series whose terms consist of a small parameter multiplied by the mass raised to an exponent that depends on the initial power-law distribution. This expression is a formal solution of the equation that describes the collisional evolution.

  6. Functional models for colloid retention in porous media at the triple line.

    PubMed

    Dathe, Annette; Zevi, Yuniati; Richards, Brian K; Gao, Bin; Parlange, J-Yves; Steenhuis, Tammo S

    2014-01-01

    Spectral confocal microscope visualizations of microsphere movement in unsaturated porous media showed that attachment at the air-water-solid (AWS) interface was an important retention mechanism. These visualizations can aid in resolving the functional form of the retention rates of colloids at the AWS interface. In this study, soil adsorption isotherm equations were adapted by replacing the chemical concentration in the water, as the independent variable, with the cumulative number of colloids passing by. In order of increasing number of fitted parameters, the functions tested were the Langmuir adsorption isotherm, the logistic distribution, and the Weibull distribution. The functions were fitted to colloid concentrations obtained from time series of images acquired with a spectral confocal microscope for three experiments in which either plain or carboxylated polystyrene latex microspheres were pulsed into a small flow chamber filled with cleaned quartz sand. Both moving and retained colloids were quantified over time. In fitting the models to the data, the agreement improved with increasing number of model parameters. The Weibull distribution gave the best fit overall. The logistic distribution did not fit the initial retention of microspheres well, but otherwise the fit was good. The Langmuir isotherm fitted only the longest time series well. The results can be explained as follows: initially, when colloids are first introduced, the retention rate is low; once colloids are at the AWS interface, they act as anchor points for other colloids to attach, thereby increasing the retention rate as clusters form; once the available attachment sites diminish, the retention rate decreases.
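
    A sketch of the best-performing case, fitting a Weibull-shaped retention curve against cumulative colloids passed; the functional form, parameter values, and synthetic "observations" below are all illustrative rather than taken from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_retention(c_cum, s_max, lam, k):
        """Retained colloids vs cumulative colloids passed, using a Weibull CDF
        shape S(c) = s_max * (1 - exp(-(c/lam)^k)); the S-shape (slow start,
        acceleration, saturation) matches the anchoring mechanism described."""
        return s_max * (1.0 - np.exp(-(c_cum / lam) ** k))

    # Hypothetical observations (image-derived retained counts vs cumulative flux).
    c = np.linspace(0, 500, 26)
    rng = np.random.default_rng(8)
    obs = weibull_retention(c, 120.0, 200.0, 2.5) + rng.normal(0, 3.0, c.size)

    popt, _ = curve_fit(weibull_retention, c, obs, p0=[100.0, 150.0, 2.0])
    print(popt)   # recovered (s_max, lam, k)
    ```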

  7. The correlation function for density perturbations in an expanding universe. II - Nonlinear theory

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1977-01-01

    A formalism is developed to find the two-point and higher-order correlation functions for a given distribution of sizes and shapes of perturbations that are randomly placed in three-dimensional space. The perturbations are described by two parameters, such as central density and size, and the two-point correlation function is explicitly related to the luminosity function of groups and clusters of galaxies.

  8. 3D glasma initial state for relativistic heavy ion collisions

    DOE PAGES

    Schenke, Björn; Schlichting, Sören

    2016-10-13

    We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy-momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.

  9. Wave propagation in embedded inhomogeneous nanoscale plates incorporating thermal effects

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Barati, Mohammad Reza; Dabbagh, Ali

    2018-04-01

    In this article, an analytical approach is developed to study the effects of thermal loading on the wave propagation characteristics of an embedded functionally graded (FG) nanoplate based on a refined four-variable plate theory. The heat conduction equation is solved to derive the nonlinear temperature distribution across the thickness. Temperature-dependent material properties of the nanoplate are graded using the Mori-Tanaka model. The nonlocal elasticity theory of Eringen is introduced to account for small-scale effects. The governing equations are derived by means of Hamilton's principle. The obtained frequencies are validated against those of previously published works. The effects of different parameters, such as the temperature distribution, foundation parameters, nonlocal parameter, and gradient index, on the wave propagation response of size-dependent FG nanoplates have been investigated.

  10. Ion-Acoustic Double-Layers in Plasmas with Nonthermal Electrons

    NASA Astrophysics Data System (ADS)

    Rios, L. A.; Galvão, R. M. O.

    2014-12-01

    A double layer (DL) consists of a positive/negative Debye sheath connecting two quasineutral regions of a plasma. These nonlinear structures can be found in a variety of plasmas, from discharge tubes to space plasmas. They have applications in plasma processing and space propulsion, and the concept is also important for areas such as applied geophysics. In the present work we investigate ion-acoustic double layers (IADLs); it is believed that these structures are responsible for the acceleration of auroral electrons, for example. The plasma distributions near a DL are usually non-Maxwellian and can be modeled via a κ distribution function. In its reduced form, the standard κ distribution is equivalent to the distribution function obtained from the maximization of the Tsallis entropy, the q distribution. The parameters κ and q measure the deviation from the Maxwellian equilibrium ("nonthermality"), with κ = 1/(q − 1); in the limit κ → ∞ (q → 1) the Maxwellian distribution is recovered. The existence of obliquely propagating IADLs in magnetized two-electron plasmas is investigated, with the hot electron population modeled via a κ distribution function [1]. Our analysis shows that only subsonic and rarefactive DLs exist for the entire range of parameters investigated. Small-amplitude DLs exist only for τ = Th/Tc greater than a critical value, which grows as κ decreases. We also observe that these structures exist only for large values of δ = Nh0/N0, but never for δ = 1. In our model, which assumes a quasineutral condition, the Mach number M grows as θ decreases (θ is the angle between the directions of the external magnetic field and wave propagation). However, M as well as the DL amplitude are reduced as a consequence of nonthermality. The relation of the quasineutral condition and of the functional form of the distribution function to the nonexistence of IADLs has also been analyzed, and some interesting results have been obtained. A more detailed discussion of this topic will be presented during the conference. References: [1] L. A. Rios and R. M. O. Galvão, Phys. Plasmas 20, 112301 (2013).
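
    For concreteness, one common convention for the isotropic κ distribution and its link to the Tsallis q parameter (normalizations vary between authors, so this is a reference form rather than the one used in [1]):

```latex
% Standard isotropic kappa distribution with thermal speed \theta and
% density n; the Maxwellian is recovered as \kappa \to \infty
f_\kappa(v) = \frac{n}{(\pi\kappa\theta^2)^{3/2}}
              \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-\tfrac{1}{2})}
              \left(1 + \frac{v^2}{\kappa\theta^2}\right)^{-(\kappa+1)},
\qquad \kappa = \frac{1}{q-1}.
```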

  11. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (the expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
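
    A small simulation sketch of the underlying construction, with illustrative parameter values: counts in a fixed window for a renewal process whose durations are a two-component exponential mixture.

```python
# Sketch: counts N(t) of a renewal process whose durations follow a
# two-component exponential mixture, illustrating the overdispersion
# (variance > mean) that the proposed count distribution captures.
import numpy as np

rng = np.random.default_rng(1)
p, rate1, rate2, t_obs = 0.3, 0.5, 3.0, 10.0   # mixing weight, rates, window

def count_renewals(n_paths=20000):
    counts = np.zeros(n_paths, dtype=int)
    for i in range(n_paths):
        t, n = 0.0, 0
        while True:
            rate = rate1 if rng.random() < p else rate2
            t += rng.exponential(1.0 / rate)
            if t > t_obs:
                break
            n += 1
        counts[i] = n
    return counts

c = count_renewals()
print("mean =", c.mean().round(3), "variance =", c.var().round(3))
# variance exceeding the mean signals overdispersion relative to a Poisson process
```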

  12. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
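
    As a minimal sketch of formulation (3) for a single parameter shared across K datasets, assuming PyMC is available (the priors, likelihood and data below are placeholders, not the plankton food web models of the paper):

```python
# Sketch: hierarchical formulation (3) for one model parameter theta.
# Hyperparameters (mu, tau) define the shared distribution from which the
# dataset-specific theta_k arise; data are synthetic.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
K = 4
y = [rng.normal(loc, 0.5, size=30) for loc in (1.0, 1.2, 0.8, 1.1)]

with pm.Model() as hierarchical:
    mu = pm.Normal("mu", 0.0, 10.0)        # hyperparameters defining the
    tau = pm.HalfNormal("tau", 1.0)        # shared distribution of theta
    theta = pm.Normal("theta", mu, tau, shape=K)
    sigma = pm.HalfNormal("sigma", 1.0)
    for k in range(K):
        pm.Normal(f"y_{k}", theta[k], sigma, observed=y[k])
    idata = pm.sample(1000, tune=1000, chains=2)
# Global analysis (1): replace theta by a single scalar.  Separate
# analysis (2): give each dataset its own independent prior on theta_k.
```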

  13. Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Weaver, Aaron S.

    2003-01-01

    Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *), as measured using constant stress rate ('dynamic fatigue') testing, were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using the closed form approximate equations derived from propagation of errors.
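
    The contrast between the two approaches is easy to reproduce; a sketch for a generic nonlinear function of two noisy inputs (the actual slow-crack-growth relations for n, D, B and A* are not used here):

```python
# Sketch: comparing a propagation-of-errors (first-order delta method)
# variance estimate with Monte Carlo simulation for a nonlinear function
# of two noisy inputs.  The function g and input statistics are illustrative.
import numpy as np

mu_x, sd_x = 10.0, 0.5      # input means and standard deviations
mu_y, sd_y = 2.0, 0.1

def g(x, y):
    return x / y**2         # a generic nonlinear combination

# First-order propagation of errors: var(g) ~ (dg/dx)^2 sx^2 + (dg/dy)^2 sy^2
dgdx = 1.0 / mu_y**2
dgdy = -2.0 * mu_x / mu_y**3
var_poe = dgdx**2 * sd_x**2 + dgdy**2 * sd_y**2

# Monte Carlo reference
rng = np.random.default_rng(0)
samples = g(rng.normal(mu_x, sd_x, 200_000), rng.normal(mu_y, sd_y, 200_000))
print("propagation of errors:", round(var_poe, 4))
print("Monte Carlo          :", round(float(samples.var()), 4))
# Agreement degrades as the coefficients of variation grow, as the abstract notes.
```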

  14. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE PAGES

    Izacard, Olivier

    2016-08-02

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase-space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase-space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from the better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  15. The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.

    2001-05-01

    We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from that in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.

  16. Modeling the bidirectional reflectance distribution function of mixed finite plant canopies and soil

    NASA Technical Reports Server (NTRS)

    Schluessel, G.; Dickinson, R. E.; Privette, J. L.; Emery, W. J.; Kokaly, R.

    1994-01-01

    An analytical model of the bidirectional reflectance of optically semi-infinite plant canopies has been extended to describe the reflectance of finite-depth canopies with contributions from the underlying soil. The model depends on 10 independent parameters describing vegetation and soil optical and structural properties. The model is inverted with a nonlinear minimization routine using directional reflectance data for lawn (leaf area index (LAI) of 9.9), soybeans (LAI, 2.9) and simulated reflectance data (LAI, 1.0) from a numerical bidirectional reflectance distribution function (BRDF) model (Myneni et al., 1988). While the ten-parameter model results in relatively low rms differences for the BRDF, most of the retrieved parameters exhibit poor stability. The most stable parameter was the single-scattering albedo of the vegetation. Canopy albedo could be derived with a relative error of less than 5% in the visible and less than 1% in the near-infrared. Sensitivity analyses were performed to determine which of the 10 parameters were most important and to assess the effects of Gaussian noise on the parameter retrievals. Of the 10 parameters, three were identified that described most of the BRDF variability. At low LAI values the most influential parameters were the single-scattering albedos (of both soil and vegetation) and LAI, while at higher LAI values (greater than 2.5) these shifted to the two scattering phase-function parameters for vegetation and the single-scattering albedo of the vegetation. The three-parameter model, formed by fixing the seven least significant parameters, gave higher rms values but was less sensitive to noise in the BRDF than the full ten-parameter model. A full hemispherical reflectance data set for lawn was then interpolated to yield BRDF values corresponding to advanced very high resolution radiometer (AVHRR) scan geometries collected over a period of nine days. The resulting parameters and BRDFs are similar to those for the full sampling geometry, suggesting that the limited geometry of AVHRR measurements might be used to reliably retrieve BRDF and canopy albedo with this model.

  17. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine -129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two order of magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
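
    A short sketch of the CCDF construction used to illustrate reasonable assurance, assuming Monte Carlo doses drawn from a lognormal spanning roughly two orders of magnitude (the distribution and limit value are illustrative only):

```python
# Sketch: building a complementary cumulative distribution function (CCDF)
# of dose from Monte Carlo output and reading off an exceedance probability.
import numpy as np

rng = np.random.default_rng(42)
dose = rng.lognormal(mean=np.log(1e-4), sigma=1.15, size=100_000)  # hypothetical units

dose_sorted = np.sort(dose)
ccdf = 1.0 - np.arange(1, dose_sorted.size + 1) / dose_sorted.size

limit = 2.5e-4                       # hypothetical regulatory dose limit
p_exceed = float(np.mean(dose > limit))
print(f"P(dose > {limit:g}) = {p_exceed:.4f}")
# Plotting dose_sorted against ccdf on log-log axes gives the usual CCDF curve.
```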

  18. Dynamic response of porous functionally graded material nanobeams subjected to moving nanoparticle based on nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Barati, Mohammad Reza

    2017-11-01

    Up to now, nonlocal strain gradient theory (NSGT) has been broadly applied to examine free vibration, static bending and buckling of nanobeams. This theory captures nonlocal stress field effects together with microstructure-dependent strain gradient effects. In this study, forced vibrations of NSGT nanobeams on an elastic substrate subjected to moving loads are examined. The nanobeam is made of functionally graded material (FGM) with even and uneven porosity distributions inside the material structure. The graded material properties with porosities are described by a modified power-law model. The dynamic deflection of the nanobeam is obtained via Galerkin and inverse Laplace transform methods. The effects of the nonlocal parameter, strain gradient parameter, moving load velocity, porosity volume fraction, type of porosity distribution and elastic foundation on the forced vibration behavior of nanobeams are discussed.

  19. Total Water-Vapor Distribution in the Summer Cloudless Atmosphere over the South of Western Siberia

    NASA Astrophysics Data System (ADS)

    Troshkin, D. N.; Bezuglova, N. N.; Kabanov, M. V.; Pavlov, V. E.; Sokolov, K. I.; Sukovatov, K. Yu.

    2017-12-01

    The spatial distribution of the total water vapor in different climatic zones of the south of Western Siberia in the summers of 2008-2011 is studied on the basis of Envisat data. The correlation analysis of the water-vapor time series from the Envisat data W and radiosonde observations w for the territory of the Omsk aerological station shows that the absolute values of W and w are linearly correlated with a coefficient of 0.77 (significance level p < 0.05). The distribution functions of the total water vapor are calculated based on the number of its measurements by Envisat for a cloudless sky in three zones with different physical properties of the underlying surface, in particular, steppes to the south of the Vasyugan Swamp and forests to the northeast of the Swamp. The distribution functions are bimodal; each mode follows the lognormal law. The parameters of these functions are given.
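
    A brief sketch of how such a bimodal, lognormal-by-mode distribution can be recovered, assuming a two-component Gaussian mixture fitted to log(W) (synthetic data stand in for the Envisat retrievals):

```python
# Sketch: a mixture of two lognormals is a mixture of two Gaussians in
# log space, so fitting a 2-component Gaussian mixture to log(W) recovers
# the parameters of each lognormal mode.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
w = np.concatenate([rng.lognormal(np.log(12.0), 0.25, 4000),   # drier mode
                    rng.lognormal(np.log(25.0), 0.20, 6000)])  # moister mode

gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(w).reshape(-1, 1))
for mean, var, wgt in zip(gm.means_.ravel(), gm.covariances_.ravel(), gm.weights_):
    print(f"mode: median W = {np.exp(mean):.1f} mm, "
          f"sigma_log = {np.sqrt(var):.2f}, weight = {wgt:.2f}")
```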

  20. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.

  1. The influence of non-Gaussian distribution functions on the time-dependent perpendicular transport of energetic particles

    NASA Astrophysics Data System (ADS)

    Lasuik, J.; Shalchi, A.

    2018-06-01

    In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although cases are known for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and different turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient, depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are of minor significance in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.

  2. Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation

    NASA Technical Reports Server (NTRS)

    Lamm, P. D.

    1984-01-01

    Spline based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and points of discontinuity for such parameters is discussed. Convergence results and a summary of numerical performance of the resulting algorithms are given.

  3. Investigation of the Specht density estimator

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Rydl, L. M.

    1971-01-01

    The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.

  4. Hadron Spectra in p+p Collisions at RHIC and LHC Energies

    NASA Astrophysics Data System (ADS)

    Khandai, P. K.; Sett, P.; Shukla, P.; Singh, V.

    2013-06-01

    We present a systematic analysis of the transverse momentum (pT) spectra of identified hadrons in p+p collisions at Relativistic Heavy Ion Collider (RHIC) energies (√s = 62.4 and 200 GeV) and at Large Hadron Collider (LHC) energies (√s = 0.9, 2.76 and 7.0 TeV) using phenomenological fit functions. We review various forms of the Hagedorn and Tsallis distributions and show their equivalence. We use the Tsallis distribution, which successfully describes the spectra in p+p collisions using two parameters: the Tsallis temperature T, which governs the soft bulk spectra, and the power n, which determines the initial production in partonic collisions. We obtain these parameters for pions, kaons and protons as a function of center-of-mass energy (√s). It is found that the parameter T has a weak but decreasing trend with increasing √s. The parameter n decreases with increasing √s, which shows that the production of hadrons at higher energies is increasingly dominated by point-like qq scatterings. Another important observation is that with increasing √s, the separation between the powers for protons and pions narrows, hinting that baryons and mesons are governed by the same production process as one moves to the highest LHC energy.
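
    A widely used way to write the Tsallis fit for such spectra (one common convention; the normalization constant and exact form vary between analyses):

```latex
% Tsallis/Hagedorn form for the invariant yield: T controls the soft
% exponential-like bulk, the power n the hard partonic tail
E\,\frac{\mathrm{d}^3 N}{\mathrm{d}p^3}
  = C_n \left(1 + \frac{m_T - m}{nT}\right)^{-n},
\qquad m_T = \sqrt{p_T^2 + m^2}.
```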

  5. A Bayesian kriging approach for blending satellite and ground precipitation observations

    USGS Publications Warehouse

    Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.

    2015-01-01

    Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables—for this research we include elevation. Prior distributions are defined for all model parameters and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation, and thus are obtained prior to implementing the spatial kriging model. This functional framework is applied to model parameters obtained by sampling from the posterior distributions, and the residuals of the linear model are subject to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.

  6. A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil

    PubMed Central

    Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo

    2014-01-01

    This article discusses the dynamics of the diameter distribution in stands of black wattle throughout their growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a model for the diameter distribution whose coefficients are related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function depend on rainfall. Such a function can have important applications, such as simulating growth potential in regions where historical growth data are lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project growth in diameter, based on the rainfall affecting the forest over a certain time period. PMID:24932909
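
    A sketch of the idea, with a hypothetical linear dependence of the Weibull parameters on accumulated rainfall (the article's fitted coefficients are not reproduced here):

```python
# Sketch: a Weibull diameter distribution whose shape and scale depend on
# accumulated rainfall.  The linear dependence and all coefficients are
# hypothetical placeholders, not the fitted values from the article.
import numpy as np
from scipy.stats import weibull_min

def diameter_pdf(d_cm, rainfall_mm):
    shape = 2.0 + 0.0005 * rainfall_mm   # hypothetical rainfall dependence
    scale = 5.0 + 0.0020 * rainfall_mm
    return weibull_min.pdf(d_cm, c=shape, scale=scale)

d = np.linspace(0.5, 20.0, 200)
for rain in (600.0, 1200.0):
    pdf = diameter_pdf(d, rain)
    print(f"rainfall {rain:.0f} mm -> modal diameter ~ {d[np.argmax(pdf)]:.1f} cm")
```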

  7. PIC simulations of a three component plasma described by Kappa distribution functions as observed in Saturn's magnetosphere

    NASA Astrophysics Data System (ADS)

    Barbosa, Marcos; Alves, Maria Virginia; Simões Junior, Fernando

    2016-04-01

    In plasmas out of thermodynamic equilibrium, the particle velocity distribution can be described by the so-called Kappa distribution. These velocity distribution functions are a generalization of the Maxwellian distribution. Since 1960, Kappa velocity distributions have been observed in several regions of interplanetary space and astrophysical plasmas. Using the KEMPO1 particle simulation code, modified to introduce Kappa distribution functions as initial conditions for the particle velocities, the normal modes of propagation were analyzed in a plasma containing two species of electrons with different temperatures and densities, and ions as a third species. This type of plasma is usually found in magnetospheres such as Saturn's. Numerical solutions of the dispersion relation for such a plasma predict the presence of an electron-acoustic mode, besides the Langmuir and ion-acoustic modes. In the presence of an ambient magnetic field, the perpendicular propagation (Bernstein mode) also changes, as compared to a Maxwellian plasma, due to the Kappa distribution function. Here, results for simulations with and without an external magnetic field are presented. The parameters for the initial conditions in the simulations were obtained from Cassini spacecraft data. Simulation results are compared with numerical solutions of the dispersion relation obtained in the literature, and they are in good agreement.

  8. Static shape control for flexible structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying and estimating the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.

  9. Systematics of capture and fusion dynamics in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui

    2017-03-01

    We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to take effectively into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
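
    One common way to write an asymmetric-Gaussian barrier distribution and its use in the capture cross section (the paper's exact parameterization and the empirical formulas fixing B_m, Δ_1 and Δ_2 may differ):

```latex
% Asymmetric Gaussian: one width below the central barrier B_m, another above
D(B) = N \exp\!\left[-\frac{(B-B_m)^2}{2\Delta^2(B)}\right], \qquad
\Delta(B) = \begin{cases} \Delta_1, & B \le B_m,\\ \Delta_2, & B > B_m, \end{cases}
% with the capture cross section as a weighted average over barriers:
\sigma_{\mathrm{cap}}(E) = \int \mathrm{d}B\, D(B)\, \sigma_{\mathrm{fus}}(E; B).
```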

  10. Investigating the relationship between a soils classification and the spatial parameters of a conceptual catchment-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Lilly, A.

    2001-10-01

    There are now many examples of hydrological models that utilise the capabilities of Geographic Information Systems to generate spatially distributed predictions of behaviour. However, the spatial variability of hydrological parameters relating to distributions of soils and vegetation can be hard to establish. In this paper, the relationship between a soil hydrological classification, Hydrology of Soil Types (HOST), and the spatial parameters of a conceptual catchment-scale model is investigated. A procedure involving inverse modelling using Monte-Carlo simulations on two catchments is developed to identify relative values for soil-related parameters of the DIY model. The relative values determine the internal variability of hydrological processes as a function of the soil type. For three of the four soil parameters studied, the variability between HOST classes was found to be consistent across the two catchments when tested independently. Problems in identifying values for the fourth, 'fast response distance', parameter have highlighted a potential limitation with the present structure of the model. The present assumption that this parameter can be related simply to soil type rather than topography appears to be inadequate. With the exclusion of this parameter, calibrated parameter sets from one catchment can be converted into equivalent parameter sets for the alternate catchment on the basis of their HOST distributions, to give a reasonable simulation of flow. Following further testing on different catchments, and modifications to the definition of the fast response distance parameter, the technique provides a methodology whereby it is possible to directly derive spatial soil parameters for new catchments.

  11. Assessing hail risk for a building portfolio by generating stochastic events

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie

    2015-04-01

    Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events have been reported in recent years, among them the July 2011 event, which cost around 125 million EUR to the Aargauer public insurance company (north-western Switzerland). This study presents new developments in a stochastic model that aims to evaluate the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with six random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, namely that of the aforementioned event and that of an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data. The simulation parameters are then adjusted by trial and error, in order to best reproduce the expected distributions. The value of the mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
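
    A sketch of the event generator, assuming illustrative sampling distributions for the six parameters (the study infers these from the radar intensity maps, and the event-shape constraint on cell locations is omitted here):

```python
# Sketch: one stochastic hail event as a sum of anisotropic Gaussian cells,
# each with the six random parameters listed above (x, y, maximum hailstone
# size, standard deviation, eccentricity, orientation).
import numpy as np

rng = np.random.default_rng(7)

def gaussian_cell(X, Y, x0, y0, size, sd, ecc, theta):
    # rotate coordinates, then evaluate an elliptical Gaussian
    xr = (X - x0) * np.cos(theta) + (Y - y0) * np.sin(theta)
    yr = -(X - x0) * np.sin(theta) + (Y - y0) * np.cos(theta)
    return size * np.exp(-0.5 * ((xr / sd) ** 2 + (yr / (sd * ecc)) ** 2))

x = np.linspace(0.0, 50.0, 251)            # km
X, Y = np.meshgrid(x, x)
field = np.zeros_like(X)
for _ in range(rng.integers(3, 8)):        # number of cells in this event
    field += gaussian_cell(
        X, Y,
        x0=rng.uniform(0, 50), y0=rng.uniform(0, 50),
        size=rng.uniform(1.0, 5.0),        # maximum hailstone size, cm
        sd=rng.uniform(1.0, 4.0),          # km
        ecc=rng.uniform(0.3, 1.0),
        theta=rng.uniform(0, np.pi))
print("peak simulated intensity (hailstone-size proxy):", round(float(field.max()), 2))
```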

  12. On the effect of velocity gradients on the depth of correlation in μPIV

    NASA Astrophysics Data System (ADS)

    Mustin, B.; Stoeber, B.

    2016-03-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.

  13. Influence of emphysema distribution on pulmonary function parameters in COPD patients

    PubMed Central

    Bastos, Helder Novais e; Neves, Inês; Redondo, Margarida; Cunha, Rui; Pereira, José Miguel; Magalhães, Adriana; Fernandes, Gabriela

    2015-01-01

    OBJECTIVE: To evaluate the impact that the distribution of emphysema has on clinical and functional severity in patients with COPD. METHODS: The distribution of the emphysema was analyzed in COPD patients, who were classified according to a 5-point visual classification system of lung CT findings. We assessed the influence of emphysema distribution type on the clinical and functional presentation of COPD. We also evaluated hypoxemia after the six-minute walk test (6MWT) and determined the six-minute walk distance (6MWD). RESULTS: Eighty-six patients were included. The mean age was 65.2 ± 12.2 years, 91.9% were male, and all but one were smokers (mean smoking history, 62.7 ± 38.4 pack-years). The emphysema distribution was categorized as obviously upper lung-predominant (type 1), in 36.0% of the patients; slightly upper lung-predominant (type 2), in 25.6%; homogeneous between the upper and lower lung (type 3), in 16.3%; and slightly lower lung-predominant (type 4), in 22.1%. Type 2 emphysema distribution was associated with lower FEV1, FVC, FEV1/FVC ratio, and DLCO. In comparison with the type 1 patients, the type 4 patients were more likely to have an FEV1 < 65% of the predicted value (OR = 6.91, 95% CI: 1.43-33.45; p = 0.016), a 6MWD < 350 m (OR = 6.36, 95% CI: 1.26-32.18; p = 0.025), and post-6MWT hypoxemia (OR = 32.66, 95% CI: 3.26-326.84; p = 0.003). The type 3 patients had a higher RV/TLC ratio, although the difference was not significant. CONCLUSIONS: The severity of COPD appears to be greater in type 4 patients, and type 3 patients tend to have greater hyperinflation. The distribution of emphysema could have a major impact on functional parameters and should be considered in the evaluation of COPD patients. PMID:26785956

  14. A distributed parameter electromechanical model for bimorph piezoelectric energy harvesters based on the refined zigzag theory

    NASA Astrophysics Data System (ADS)

    Chen, Chung-De

    2018-04-01

    In this paper, a distributed parameter electromechanical model for bimorph piezoelectric energy harvesters is developed based on the refined zigzag theory (RZT). In this model, the zigzag function is incorporated into the axial displacement, so that the zigzag distribution of the displacement between adjacent layers of the bimorph structure can be considered. The governing equations, comprising three equations of motion and one circuit equation, are derived using Hamilton's principle. The natural frequency, its corresponding modal function and the steady-state response to base-excitation motion are given in exact form. The presented results are benchmarked against the finite element method and two beam theories, the first-order shear deformation theory and the classical beam theory. Comparative examples show that the RZT provides predictions of output voltage and generated power at high accuracy, especially for the case of a soft middle layer. Variation of parameters such as the beam thickness, excitation frequency and external electrical load is investigated, and their effects on the performance of the energy harvesters are studied using the RZT developed in this paper. Based on this refined theory, analysts and engineers can capture more details of the electromechanical behavior of piezoelectric harvesters.

  15. Deployment strategy for battery energy storage system in distribution network based on voltage violation regulation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.

    2017-11-01

    In order to improve the voltage violations caused by the grid connection of photovoltaic (PV) systems in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective function of the inner-level programming is to minimize voltage violation, with the power of PV and BESS as the variables. The objective function of the outer-level programming is to minimize the comprehensive function originating from the inner-level programming and all the BESS operating parameters, with the capacity and rated power of BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, the simulation results for the IEEE 33-bus system prove that the BESS deployment strategy proposed in this paper is well adapted to voltage violation regulation in variable distribution network operation scenarios. It contributes to regulating voltage violations in the distribution network, as well as to improving the utilization of PV systems.

  16. On the mass function of stars growing in a flocculent medium

    NASA Astrophysics Data System (ADS)

    Maschberger, Th.

    2013-12-01

    Stars form in regions of very inhomogeneous densities and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which spreads the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, ṁ ∝ m^α. The distribution functions evolve in time and develop a power-law tail attached to a lognormal body, as in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail ∝ m^(-α) at the high-mass side for α > 1 and at the low-mass side for α < 1. Large fluctuations require that their distribution be strictly positive, for example, lognormal. For positive fluctuations the mass distribution function develops the power-law tail always at the high-mass side, independent of whether α is larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent medium, the range of parameters for which non-linear stochastic growth could shape the stellar initial mass function, as well as the effects of a distribution of initial masses and growth times.
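
    A toy simulation of this mechanism, assuming strictly positive (lognormal) rate fluctuations, α < 1 and illustrative constants; the tail estimate is deliberately crude:

```python
# Sketch: non-linear stochastic accretion dm/dt = xi_t * m**alpha with
# strictly positive fluctuating rates xi_t (mean 1), which should push a
# power-law tail out of the lognormal body at the high-mass side.
import numpy as np

rng = np.random.default_rng(11)
alpha, dt, n_steps = 0.8, 1e-3, 2000
m = rng.lognormal(0.0, 0.1, size=20_000)        # initial masses

for _ in range(n_steps):
    xi = rng.lognormal(-0.125, 0.5, m.size)     # positive fluctuations, mean 1
    m += dt * xi * m**alpha

# Crude tail check: slope of the uppermost log-log histogram bins
hist, edges = np.histogram(m, bins=np.logspace(np.log10(m.min()),
                                               np.log10(m.max()), 60))
centers = np.sqrt(edges[1:] * edges[:-1])
keep = hist > 0
slope = np.polyfit(np.log(centers[keep][-15:]), np.log(hist[keep][-15:]), 1)[0]
print("approximate high-mass tail slope:", round(float(slope), 2))
```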

  17. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early on, PBDHMs were assumed to derive their model parameters directly from terrain properties, with no need for parameter calibration. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may be necessary. This study has two main purposes: the first is to propose a parameter optimization method for PBDHMs in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of PBDHMs by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model (a physically based distributed hydrological model proposed for catchment flood forecasting) as the study model, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can greatly improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
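
    A compact sketch of PSO with a linearly decreasing inertia weight and one plausible arccosine-shaped schedule for the acceleration coefficients (the paper's exact formulas and the Liuxihe objective are not reproduced; a toy objective stands in):

```python
# Sketch: PSO with linearly decreasing inertia weight w and arccosine-based
# acceleration-coefficient schedules.  Swarm size 20 and 30 iterations echo
# the values found suitable in the study; all other constants are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def objective(x):                        # stand-in for the flood-forecast error
    return np.sum((x - 1.0) ** 2, axis=-1)

n_particles, dim, max_iter = 20, 4, 30
w_max, w_min, c_max, c_min = 0.9, 0.4, 2.5, 0.5

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for it in range(max_iter):
    w = w_max - (w_max - w_min) * it / max_iter            # linear decrease
    frac = np.arccos(1.0 - 2.0 * it / max_iter) / np.pi    # 0 -> 1, arccos-shaped
    c1 = c_max - (c_max - c_min) * frac                    # cognitive: shrinks
    c2 = c_min + (c_max - c_min) * frac                    # social: grows
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best objective:", round(float(pbest_f.min()), 6))
```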

  18. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    NASA Astrophysics Data System (ADS)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.

  19. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
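
    The computational point is easy to illustrate: once the integrals are folded into a multivariate normal CDF, a standard routine evaluates them directly. A sketch using scipy's multivariate normal (the actual two-part-model likelihood is far more structured than this toy orthant probability):

```python
# Sketch: evaluating a high-dimensional Gaussian orthant-type probability
# via the multivariate normal CDF, the device that makes the marginal
# likelihood tractable.  Dimension and covariance are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

d = 8                                       # number of correlated occasions
rho = 0.4
cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # exchangeable correlation

upper = np.zeros(d)                         # P(Z_1 <= 0, ..., Z_d <= 0)
prob = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)
print(f"{d}-dimensional normal CDF value: {prob:.4f}")
# Each such evaluation replaces a d-dimensional numerical integration that
# would otherwise have to be approximated by simulation.
```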

  20. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some of its properties are presented. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real count data on methamphetamine in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
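
    The zero-truncation step itself is generic; a sketch using a plain negative binomial as a stand-in (the negative binomial-Erlang pmf derived in the paper is not reproduced here):

```python
# Sketch: zero-truncation and maximum likelihood.  For any count pmf,
# P(X = x | X > 0) = pmf(x) / (1 - pmf(0)); here a plain negative binomial
# illustrates the construction.
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def zt_logpmf(x, r, p):
    # log P(X = x | X > 0) = log pmf(x) - log(1 - pmf(0))
    return nbinom.logpmf(x, r, p) - np.log1p(-nbinom.pmf(0, r, p))

rng = np.random.default_rng(9)
sample = rng.negative_binomial(2.0, 0.3, 5000)
sample = sample[sample > 0]                  # observed zero-truncated counts

# unconstrained parameterization: r = exp(a), p = sigmoid(b)
nll = lambda th: -np.sum(zt_logpmf(sample, np.exp(th[0]),
                                   1.0 / (1.0 + np.exp(-th[1]))))
fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
r_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"r = {r_hat:.2f}, p = {p_hat:.2f}")
```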

  1. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
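
    A stripped-down version of the mimicry experiment, assuming illustrative mixture parameters: sample bout durations from a two-exponential mixture, fit a power-law tail by maximum likelihood, and score the (incorrect) model with a Kolmogorov-Smirnov statistic.

```python
# Sketch: can a two-exponential mixture pass for a power law?  Fit a
# Pareto tail above a cutoff and compute the KS statistic of the fit.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
n = 20_000
durations = np.where(rng.random(n) < 0.6,
                     rng.exponential(0.5, n),     # short-bout component
                     rng.exponential(8.0, n))     # long-bout component

xmin = np.quantile(durations, 0.5)                # power-law fit cutoff
tail = durations[durations >= xmin]
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))   # MLE (Hill) exponent

# KS distance between the tail sample and the fitted power law:
# x/xmin ~ Pareto with shape b = alpha - 1
stat, pval = kstest(tail / xmin, "pareto", args=(alpha - 1.0,))
print(f"alpha = {alpha:.2f}, KS statistic = {stat:.3f}, p = {pval:.3g}")
```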

  2. Evolution of arbitrary moments of radiant intensity distribution for partially coherent general beams in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Dan, Youquan; Xu, Yonggen

    2018-04-01

    The evolution law of arbitrary order moments of the Wigner distribution function, which can be applied to the different spatial power spectra, is obtained for partially coherent general beams propagating in atmospheric turbulence using the extended Huygens-Fresnel principle. A coupling coefficient of radiant intensity distribution (RID) in turbulence is introduced. Analytical expressions of the evolution of the first five-order moments, kurtosis parameter, coupling coefficient of RID for general beams in turbulence are derived, and the formulas are applied to Airy beams. Results show that there exist two types for general beams in turbulence. A larger value of kurtosis parameter for Airy beams also reveals that coupling effect due to turbulence is stronger. Both theoretical analysis and numerical results show that the maximum value of kurtosis parameter for an Airy beam in turbulence is independent of turbulence strength parameter and is only determined by inner scale of turbulence. Relative angular spread, kurtosis and coupling coefficient are less influenced by turbulence for Airy beams with a smaller decay factor and a smaller initial width of the first lobe.

  3. Functional parameter screening for predicting durability of rolling sliding contacts with different surface finishes

    NASA Astrophysics Data System (ADS)

    Dimkovski, Z.; Lööf, P.-J.; Rosén, B.-G.; Nilsson, P. H.

    2018-06-01

    The reliability and lifetime of machine elements such as gears and rolling bearings depend on their wear and fatigue resistance. In order to screen wear and surface damage, three finishing processes, (i) brushing, (ii) manganese phosphating and (iii) shot peening, were applied to three disc pairs, which were then long-term tested on a twin-disc tribometer. In this paper, the elastic contact of the disc surfaces (measured after only a few revolutions) was simulated and a number of functional and roughness parameters were correlated. The functional parameters consisted of subsurface stresses at different depths and a new parameter called the 'pressure spikes' factor'. The new parameter is derived from the pressure distribution and takes into account the proximity and magnitude of the pressure spikes. Strong correlations were found between the pressure spikes' factor and surface peak/height parameters. The orthogonal shear stresses and von Mises stresses at the shallowest depths under the surface showed the highest correlations, but no good correlations were found when the statistics of the whole stress fields were analyzed. The use of the new parameter offers a fast way to screen the durability of contacting surfaces operating under similar conditions.

  4. Statistical characterization of discrete conservative systems: The web map

    NASA Astrophysics Data System (ADS)

    Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino

    2017-10-01

    We numerically study the two-dimensional, area preserving, web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k \int dx\, p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized q statistics, based on the nonadditive entropic functional $S_q[p(x)] = k\,\frac{1 - \int dx\,[p(x)]^q}{q - 1}$ (with $q \in \mathbb{R}$ and $S_1 = S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter K, we observe q-Gaussian distributions with $q = 1.935\ldots$ (Gaussian distributions), like for the standard map. In contrast, for intermediate values of K, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.

  5. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
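
    A minimal sketch of one loop of such a Bayesian adaptive procedure follows. The three-parameter decay form p(t) = a0 + a1*exp(-t/tau), the grid ranges, and the candidate cue delays are assumptions for illustration; the actual qPR parameterization and stimulus space may differ:

        import numpy as np

        # Grid prior over a hypothetical 3-parameter decay p(t) = a0 + a1*exp(-t/tau).
        a0 = np.linspace(0.2, 0.6, 15)    # asymptotic performance level
        a1 = np.linspace(0.1, 0.7, 15)    # span of the decaying component
        tau = np.linspace(0.05, 1.0, 15)  # decay constant (s)
        A0, A1, TAU = np.meshgrid(a0, a1, tau, indexing="ij")
        prior = np.full(A0.shape, 1.0); prior /= prior.sum()

        delays = np.array([0.0, 0.1, 0.3, 0.6, 1.0])  # candidate cue delays (s)

        def expected_info_gain(prior, t):
            # Mutual information between the binary response and the parameters.
            p = np.clip(A0 + A1 * np.exp(-t / TAU), 1e-6, 1 - 1e-6)
            m = (prior * p).sum()                     # predictive P(correct)
            H = lambda q: -(q * np.log(q)).sum()      # Shannon entropy of a grid pmf
            return H(prior) - m * H(prior * p / m) - (1 - m) * H(prior * (1 - p) / (1 - m))

        best = delays[np.argmax([expected_info_gain(prior, t) for t in delays])]
        # Bayes update after observing, say, a correct response at that delay:
        p = np.clip(A0 + A1 * np.exp(-best / TAU), 1e-6, 1 - 1e-6)
        posterior = prior * p; posterior /= posterior.sum()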

  6. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045

  7. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations (Version 2)

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2017-05-01

    GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision (Version 2 of Basics) makes mostly minor additions to functionality and includes some simplifying name changes.

  8. Models for estimating photosynthesis parameters from in situ production profiles

    NASA Astrophysics Data System (ADS)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work we argue that the choice of the primary production model should reflect the available data and these models should be data driven regarding parameter estimation.
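
    As a concrete illustration of recovering the initial slope and assimilation number, the sketch below fits one common photosynthesis irradiance form (the Jassby-Platt hyperbolic tangent) to synthetic data; the paper evaluates a whole suite of such functions, and all parameter values here are hypothetical:

        import numpy as np
        from scipy.optimize import curve_fit

        # P(I) = Pm * tanh(alpha * I / Pm): initial slope alpha, assimilation number Pm.
        def pi_curve(I, alpha, Pm):
            return Pm * np.tanh(alpha * I / Pm)

        rng = np.random.default_rng(1)
        I = np.linspace(5, 800, 40)                               # irradiance
        P = pi_curve(I, 0.04, 6.0) + rng.normal(0, 0.2, I.size)   # synthetic production

        popt, pcov = curve_fit(pi_curve, I, P, p0=[0.01, 1.0])
        alpha_hat, Pm_hat = popt
        print(alpha_hat, Pm_hat, np.sqrt(np.diag(pcov)))  # estimates and 1-sigma errors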

  9. Batch Tests To Determine Activity Distribution and Kinetic Parameters for Acetate Utilization in Expanded-Bed Anaerobic Reactors

    PubMed Central

    Fox, Peter; Suidan, Makram T.

    1990-01-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (Ks) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for Ks. However, Ks was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of Ks on the effluent 3-ethylphenol concentration. A two-parameter search determined a Ks of 8.99 mg of acetate per liter and a Ki of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made. PMID:16348175
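
    The competitively inhibited Monod expression described above is simple to write down; the sketch below uses the reported Ks and Ki values, while Vmax stands in for a batch-test maximum utilization rate (a hypothetical number here):

        def acetate_rate(S, I, Vmax=50.0, Ks=8.99, Ki=2.41):
            """Monod rate with competitive inhibition by 3-ethylphenol.

            S: acetate (mg/L); I: 3-ethylphenol (mg/L); Ks, Ki from the paper's
            two-parameter search; Vmax is a placeholder maximum utilization rate.
            """
            return Vmax * S / (Ks * (1.0 + I / Ki) + S)

        # The apparent half-velocity constant grows linearly with the inhibitor:
        for I in (0.0, 2.41, 10.0):
            print(I, 8.99 * (1.0 + I / 2.41))  # Ks,app in mg acetate per liter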

  10. Batch tests to determine activity distribution and kinetic parameters for acetate utilization in expanded-bed anaerobic reactors.

    PubMed

    Fox, P; Suidan, M T

    1990-04-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (K(s)) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for K(s). However, K(s) was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of K(s) on the effluent 3-ethylphenol concentration. A two-parameter search determined a K(s) of 8.99 mg of acetate per liter and a K(i) of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made.

  11. Application of Powder Diffraction Methods to the Analysis of Short- and Long-Range Atomic Order in Nanocrystalline Diamond and SiC: The Concept of the Apparent Lattice Parameter (alp)

    NASA Technical Reports Server (NTRS)

    Palosz, B.; Grzanka, E.; Gierlotka, S.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H.-P.; Palosz, W.

    2003-01-01

    Two methods of the analysis of powder diffraction patterns of diamond and SiC nanocrystals are presented: (a) examination of changes of the lattice parameters with diffraction vector Q ('apparent lattice parameter', alp), which refers to Bragg scattering, and (b) examination of changes of inter-atomic distances based on the analysis of the atomic Pair Distribution Function, PDF. Application of these methods was studied based on the theoretical diffraction patterns computed for models of nanocrystals having (i) a perfect crystal lattice, and (ii) a core-shell structure, i.e. constituting a two-phase system. The models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the shell. X-ray and neutron experimental diffraction data of nanocrystalline SiC and diamond powders of grain diameter from 4 nm up to micrometers were used. The effects of the internal pressure and strain at the grain surface on the structure are discussed based on the experimentally determined dependence of the alp values on the Q-vector, and changes of the interatomic distances with the grain size determined experimentally by the atomic Pair Distribution Function (PDF) analysis. The experimental results lend strong support to the concept of a two-phase, core and surface shell structure of nanocrystalline diamond and SiC.

  12. Modelling study on the three-dimensional neutron depolarisation response of the evolving ferrite particle size distribution during the austenite-ferrite phase transformation in steels

    NASA Astrophysics Data System (ADS)

    Fang, H.; van der Zwaag, S.; van Dijk, N. H.

    2018-07-01

    The magnetic configuration of a ferromagnetic system with mono-disperse and poly-disperse distributions of magnetic particles and inter-particle interactions has been computed. The analysis is general in nature and applies to all systems containing magnetically interacting particles in a non-magnetic matrix, but has been applied to steel microstructures, consisting of a paramagnetic austenite phase and a ferromagnetic ferrite phase, as formed during the austenite-to-ferrite phase transformation in low-alloyed steels. The characteristics of the computational microstructures are linked to the correlation function and the determinant of the depolarisation matrix, which can be obtained experimentally in three-dimensional neutron depolarisation (3DND). By tuning the parameters in the model used to generate the microstructure, we studied the effect of the (magnetic) particle size distribution on the 3DND parameters. It is found that the magnetic particle size derived from 3DND data matches the microstructural grain size over a wide range of volume fractions and grain size distributions. A relationship between the correlation function and the relative width of the particle size distribution was proposed to accurately account for the width of the size distribution. This evaluation shows that 3DND experiments can provide unique in situ information on the austenite-to-ferrite phase transformation in steels.

  13. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
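
    The construction in the paper is in terms of gamma variables; as a quick independent check one can use the standard correspondence between q-Gaussians with 1 < q < 3 and Student's t distributions with nu = (3-q)/(q-1) degrees of freedom. The sketch below relies on that correspondence, not on the authors' derivation:

        import numpy as np
        from scipy import stats

        q = 1.5                          # complexity index, 1 < q < 3
        nu = (3.0 - q) / (q - 1.0)       # Student-t degrees of freedom (here nu = 3)
        samples = stats.t.rvs(df=nu, size=200_000, random_state=2)

        # A q-Gaussian decays like |x|^(-2/(q-1)) = |x|^(-(nu+1)); compare the
        # empirical two-sided tail fraction with the Student-t survival function.
        x0 = 10.0
        print((np.abs(samples) > x0).mean(), 2 * stats.t.sf(x0, df=nu))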

  14. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

    Earthquake engineering parameters are very important in engineering practice, especially for anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two directions. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. The simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with a strip-like radiation pattern in the fault-parallel component and a circular radiation pattern in the fault-normal component; observed and synthetic PGV agree well, but show different distribution characteristics in the two components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree well with synthetic values. (4) The predominant period in the fault-normal component differs markedly across parts of Kyushu and is strongly affected by site conditions. (5) Most parameters have good reference value where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis and earthquake disaster analysis can be conducted for future urban planning.

  15. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° × 5° × 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
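
    The first advantage, that the logarithm of any moment is linear in the distribution parameters, can be checked numerically: for N(D) = N_T/(√(2π)σD) exp(-(ln D - μ)²/(2σ²)) the nth moment is M_n = N_T exp(nμ + n²σ²/2), so ln M_n = ln N_T + nμ + n²σ²/2. The DSD parameters below are hypothetical:

        import numpy as np
        from scipy.integrate import quad

        NT, mu, sigma = 1000.0, np.log(1.2), 0.4   # hypothetical log-normal DSD parameters

        def dsd(D):
            z = (np.log(D) - mu) / sigma
            return NT / (np.sqrt(2 * np.pi) * sigma * D) * np.exp(-0.5 * z ** 2)

        for n in (0, 3, 6):   # e.g. number-, mass- and reflectivity-related moments
            Mn_num, _ = quad(lambda D: D ** n * dsd(D), 0.0, np.inf)
            Mn_ana = NT * np.exp(n * mu + 0.5 * n ** 2 * sigma ** 2)
            print(n, Mn_num, Mn_ana)   # ln(Mn) = ln(NT) + n*mu + n^2*sigma^2/2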

  16. Mass-number and excitation-energy dependence of the spin cutoff parameter

    DOE PAGES

    Grimes, S. M.; Voinov, A. V.; Massey, T. N.

    2016-07-12

    Here, the spin cutoff parameter determining the nuclear level density spin distribution ρ(J) is defined through the spin projection as $\langle J_z^2 \rangle^{1/2}$ or, equivalently for spherical nuclei, $(\langle J(J+1) \rangle / 3)^{1/2}$. It is needed to divide the total level density into levels as a function of J. To obtain the total level density at the neutron binding energy from the s-wave resonance count, the spin cutoff parameter is also needed. The spin cutoff parameter has been calculated as a function of excitation energy and mass with a super-conducting Hamiltonian. Calculations have been compared with two commonly used semiempirical formulas. A need for further measurements is also observed. Some complications for deformed nuclei are discussed. The quality of spin cutoff parameter data derived from isomeric ratio measurements is examined.
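
    For orientation, the way the spin cutoff parameter σ enters the level-density spin distribution is standard (the Bethe / Gilbert-Cameron form); the sketch below evaluates the fraction of levels at each J for a hypothetical σ:

        import numpy as np

        def spin_fraction(J, sigma):
            # f(J) = (2J+1)/(2 sigma^2) * exp(-(J+1/2)^2 / (2 sigma^2))
            return (2 * J + 1) / (2 * sigma ** 2) * np.exp(-((J + 0.5) ** 2) / (2 * sigma ** 2))

        J = np.arange(0, 30)
        f = spin_fraction(J, sigma=4.0)   # sigma = 4 is a hypothetical spin cutoff value
        print(f.sum())                    # ~ 1: the fractions sum to unity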

  17. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  18. Exploring activity-driven network with biased walks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Wu, Ding Juan; Lv, Fang; Su, Meng Long

    We investigate the concurrent dynamics of biased random walks and the activity-driven network, where the preferential transition probability is expressed in terms of the edge-weighting parameter. We obtain analytical expressions for the stationary distribution and the coverage function in directed and undirected networks, all of which depend on the weight parameter. By appropriately adjusting this parameter, a more effective search strategy can be obtained than with the unbiased random walk, in both directed and undirected networks, since network weights play a significant role in the diffusion process.
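
    For the undirected case, reversibility gives a compact check of such weight-dependent stationary distributions: a walk with transition probability proportional to w_ij^α has stationary distribution proportional to the biased strength Σ_j w_ij^α. The sketch below verifies this on a random weighted graph; it is a generic detailed-balance check, not the paper's activity-driven calculation:

        import numpy as np

        rng = np.random.default_rng(3)
        n, alpha = 6, 1.5
        W = rng.random((n, n)); W = (W + W.T) / 2.0   # undirected weights
        np.fill_diagonal(W, 0.0)

        S = W ** alpha                                # biased edge weights w_ij^alpha
        P = S / S.sum(axis=1, keepdims=True)          # biased-walk transition matrix

        vals, vecs = np.linalg.eig(P.T)               # left eigenvector for eigenvalue 1
        pi = np.real(vecs[:, np.argmax(np.real(vals))]); pi /= pi.sum()

        print(pi)
        print(S.sum(axis=1) / S.sum())                # matches: pi_i ∝ sum_j w_ij^alpha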

  19. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
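
    The flavor of this cancellation can be illustrated generically: likelihoods of Fisher-type distributions contain factors such as sinh(κ) or modified Bessel functions that overflow for large precision parameter κ, but their logarithms are easy to evaluate once the exponential is factored out analytically. This is a schematic illustration, not the authors' exact log-likelihood:

        import numpy as np
        from scipy.special import i0e   # exponentially scaled I0: i0e(x) = exp(-|x|)*I0(x)

        kappa = 800.0   # large enough that exp(kappa) overflows double precision

        # log I0(kappa) without overflow: factor exp(kappa) out of I0.
        log_I0 = kappa + np.log(i0e(kappa))
        # log sinh(kappa) without overflow: sinh(k) = exp(k) * (1 - exp(-2k)) / 2.
        log_sinh = kappa - np.log(2.0) + np.log1p(-np.exp(-2.0 * kappa))

        print(log_I0, log_sinh)   # finite, whereas naive exp-based forms return inf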

  20. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
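
    A hedged sketch of the two-stage idea follows, with scipy's differential evolution standing in for the GA and a bounded quasi-Newton refinement standing in for the truncated-Newton step. The two-parameter "forward model" is a deliberately under-determined toy (only the ratio R/T is identifiable), echoing the paper's point that prior information is needed:

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        obs = np.array([10.0, 8.5, 7.2])      # "observed" hydraulic heads

        def heads(p):
            # Hypothetical forward model: heads depend only on the ratio R/T.
            T, R = p
            return np.array([1.0, 0.85, 0.72]) * R / T

        def misfit(p):
            return np.sum((heads(p) - obs) ** 2)

        bounds = [(0.1, 10.0), (1.0, 100.0)]
        coarse = differential_evolution(misfit, bounds, seed=4)              # global stage
        fine = minimize(misfit, coarse.x, method="L-BFGS-B", bounds=bounds)  # local refine
        print(coarse.x, fine.x, fine.fun)   # many (T, R) pairs with R/T = 10 fit equally well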

  1. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
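
    The headline speed ranking is easy to reproduce on a standard smooth test problem; the sketch below compares scipy's CG, quasi-Newton (BFGS) and Newton-CG implementations on the Rosenbrock function as a stand-in for a dose-volume objective:

        import numpy as np
        from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

        x0 = np.full(10, 1.2)   # common starting point for all methods
        for method in ("CG", "BFGS", "Newton-CG"):
            kwargs = {"hess": rosen_hess} if method == "Newton-CG" else {}
            res = minimize(rosen, x0, jac=rosen_der, method=method, tol=1e-10, **kwargs)
            print(f"{method:9s} iterations: {res.nit:4d}  f* = {res.fun:.2e}")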

  2. Sub-Poissonian photon statistics in the coherent state Jaynes-Cummings model in non-resonance

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-tai; Fan, An-fu

    1992-03-01

    We study a model of a two-level atom (TLA) interacting non-resonantly with a single-mode quantized cavity field (QCF). The photon number probability function, the mean photon number and Mandel's fluctuation parameter are calculated. Sub-Poissonian photon statistics are obtained in the non-resonant interaction. These statistical properties depend strongly on the detuning parameters.
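
    Mandel's fluctuation parameter, Q = (⟨n²⟩ - ⟨n⟩²)/⟨n⟩ - 1, is straightforward to evaluate for any photon-number distribution: Q = 0 is Poissonian (coherent state) and Q < 0 is sub-Poissonian. A minimal sketch:

        import numpy as np
        from scipy.stats import poisson

        def mandel_Q(pn):
            # pn[k] = probability of k photons, k = 0, 1, 2, ...
            k = np.arange(pn.size)
            mean = np.sum(k * pn)
            var = np.sum(k ** 2 * pn) - mean ** 2
            return var / mean - 1.0

        print(mandel_Q(poisson.pmf(np.arange(80), 3.0)))  # coherent state: Q ~ 0
        fock = np.zeros(10); fock[4] = 1.0                # number state |4>
        print(mandel_Q(fock))                             # Q = -1: maximally sub-Poissonian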

  3. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then we explore the joint a-posteriori probability density function associated with the cost function around such a minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
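
    A compact sketch of the global stage is given below: scipy's basin-hopping (deterministic local minimization combined with random jumps) fits a generalized Brune-type spectrum, here taken as log10 S(f) = log10 Ω0 - log10(1 + (f/fc)^γ), under an L2 misfit. The spectral form, noise level and bounds are assumptions for illustration:

        import numpy as np
        from scipy.optimize import basinhopping

        f = np.logspace(-1, 1.3, 80)          # frequencies (Hz)

        def log_spec(p, f):
            logO0, fc, gamma = p              # low-freq level, corner frequency, HF decay
            return logO0 - np.log10(1.0 + (f / fc) ** gamma)

        rng = np.random.default_rng(6)
        obs = log_spec((2.0, 1.5, 2.2), f) + rng.normal(0.0, 0.05, f.size)

        cost = lambda p: np.sum((log_spec(p, f) - obs) ** 2)   # L2-norm misfit
        res = basinhopping(cost, x0=[0.0, 1.0, 2.0], niter=50, seed=7,
                           minimizer_kwargs={"method": "L-BFGS-B",
                                             "bounds": [(-2, 6), (0.1, 10), (0.5, 4)]})
        print(res.x)   # close to the true (2.0, 1.5, 2.2)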

  4. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed of the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
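
    Numerically, the form factor is a single quadrature once the profile is specified. The sketch below uses one Fermi-Dirac-like term ρ(r) = 1/(exp((r-R)/d) + 1) (the paper's profile is a sum of two such terms) with hypothetical R and d:

        import numpy as np
        from scipy.integrate import trapezoid

        R, d = 50.0, 2.0                     # particle radius and shape parameter d
        r = np.linspace(1e-3, 4 * R, 4000)
        rho = 1.0 / (np.exp((r - R) / d) + 1.0)

        def form_factor(q):
            # P(q) = |4*pi * Int rho(r) r^2 sin(qr)/(qr) dr|^2
            amp = 4 * np.pi * trapezoid(rho * r ** 2 * np.sinc(q * r / np.pi), r)
            return amp ** 2

        for q in np.logspace(-2, 0, 5):
            print(q, form_factor(q))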

  5. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  6. Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response

    NASA Astrophysics Data System (ADS)

    Floerchinger, Stefan; Wiedemann, Urs Achim

    2014-08-01

    An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like $1/N^{n-1}$. This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
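
    The $1/N^{n-1}$ behavior is the familiar scaling of cumulants under averaging over N independent sources: for the mean of N i.i.d. variables the nth cumulant is κ_n/N^{n-1}. A quick numerical check using k-statistics (unbiased cumulant estimators):

        import numpy as np
        from scipy.stats import kstat

        rng = np.random.default_rng(8)
        for N in (10, 50, 250):
            # normalized "density" built from N independent unit-exponential sources
            m = rng.exponential(1.0, size=(20_000, N)).mean(axis=1)
            print(N, kstat(m, 2) * N, kstat(m, 3) * N ** 2)  # ~ kappa_2 = 1, kappa_3 = 2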

  7. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from a better understanding of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by a new understanding of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  9. Bayesian Estimation of Reliability Burr Type XII Under Al-Bayyatis’ Suggest Loss Function with Numerical Solution

    NASA Astrophysics Data System (ADS)

    Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.

    2018-05-01

    This paper considers the Burr type XII distribution. The maximum likelihood and Bayes methods of estimation are used to estimate the unknown scale parameter (α). Al-Bayyati's loss function and a suggested loss function are used to find the reliability with the least loss, and the reliability function is expanded in terms of a set of power functions. Matlab (ver. 9) is used for the computations, and some examples are given.
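
    For reference, the Burr type XII reliability is R(t) = (1 + t^c)^{-α}, and when the second shape parameter c is known the maximum likelihood estimate of α has the closed form α̂ = n / Σ ln(1 + t_i^c). A hedged sketch with arbitrary simulation settings:

        import numpy as np

        rng = np.random.default_rng(9)
        c, a_true = 2.0, 1.5
        u = rng.random(20_000)
        t = ((1.0 - u) ** (-1.0 / a_true) - 1.0) ** (1.0 / c)  # inverse-CDF sampling

        a_hat = t.size / np.sum(np.log1p(t ** c))   # closed-form MLE of alpha (c known)
        print(a_hat, (1.0 + 1.0 ** c) ** (-a_hat))  # estimate and plug-in R(t = 1)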

  10. Bonded-cell model for particle fracture.

    PubMed

    Nguyen, Duc-Hanh; Azéma, Emilien; Sornay, Philippe; Radjai, Farhang

    2015-02-01

    Particle degradation and fracture play an important role in natural granular flows and in many applications of granular materials. We analyze the fracture properties of two-dimensional disklike particles modeled as aggregates of rigid cells bonded along their sides by a cohesive Mohr-Coulomb law and simulated by the contact dynamics method. We show that the compressive strength scales with tensile strength between cells but depends also on the friction coefficient and a parameter describing cell shape distribution. The statistical scatter of compressive strength is well described by the Weibull distribution function with a shape parameter varying from 6 to 10 depending on cell shape distribution. We show that this distribution may be understood in terms of percolating critical intercellular contacts. We propose a random-walk model of critical contacts that leads to particle size dependence of the compressive strength in good agreement with our simulation data.
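
    Since the strength scatter is summarized by a Weibull shape parameter between 6 and 10, a two-parameter Weibull fit to simulated strengths illustrates the estimation step; the scale value below is hypothetical:

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(10)
        m_true, s_true = 8.0, 100.0   # shape in the reported 6-10 range; arbitrary scale
        strengths = weibull_min.rvs(m_true, scale=s_true, size=500, random_state=rng)

        # Two-parameter fit: location fixed at zero.
        m_hat, loc, s_hat = weibull_min.fit(strengths, floc=0)
        print(m_hat, s_hat)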

  11. A size-structured model of bacterial growth and reproduction.

    PubMed

    Ellermeyer, S F; Pilyugin, S S

    2012-01-01

    We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.

  12. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    PubMed

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
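
    A minimal sketch of the quantities involved, under the common conventions EUD = (Σ v_i d_i^{1/n})^n and NTCP = Φ((EUD - TD50)/(m·TD50)); the organ parameters and the two-bin dose distribution below are hypothetical, not the Emami-Burman fits:

        import numpy as np
        from scipy.stats import norm

        def eud(doses, volumes, n):
            # Generalized EUD of a dose distribution {(d_i, v_i)}.
            v = np.asarray(volumes, float); v /= v.sum()
            return np.sum(v * np.asarray(doses, float) ** (1.0 / n)) ** n

        def ntcp_lkb(eud_gy, TD50, m):
            # LKB NTCP: probit of t = (EUD - TD50) / (m * TD50).
            return norm.cdf((eud_gy - TD50) / (m * TD50))

        d, v = [20.0, 60.0], [0.7, 0.3]               # hypothetical dose bins
        e = eud(d, v, n=0.5)
        print(e, ntcp_lkb(e, TD50=65.0, m=0.14))      # hypothetical organ parameters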

  13. Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function

    NASA Astrophysics Data System (ADS)

    Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.

    2015-12-01

    In this study, we evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distributions of rain droplets at three different heights, with the goal of improving the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution (a gamma SDF with a fixed shape parameter, FSP); gamma SDFs whose shape parameter was diagnosed with three methods, based on Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a gamma SDF whose shape parameter was solved for (SSP); and the lognormal SDF. Based on preliminary experiments, three ensemble methods for deciding the gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying an FSP was 10^-2 for fitting the 0-order moment of the observed rain droplet distribution, and 10^-1 and 10^0 for the 1-4 order and 5-6 order moments, respectively. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods improved the fitting accuracy for the 0-6 order moments, especially the method coupling SSP and DSPS08, which yielded average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error of fitting three moments using the lognormal SDF was much larger than that of the gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range caused overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group: when the ensemble method coupling SSP and DSPS08 was used, a better fit was obtained to the 1-3-5 moment group of the SDF than to the 0-3-6 moment group.

  14. Comparison of multiplicity distributions to the negative binomial distribution in muon-proton scattering

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.

    1987-09-01

    The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by negative binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and the rapidity dependence of the parameters is presented and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed.

  15. Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data

    NASA Astrophysics Data System (ADS)

    Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.

    2018-03-01

    We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.

  16. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization is assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  17. Robust inference in the negative binomial regression model with an application to falls data.

    PubMed

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well known to be sensitive to model misspecifications, which in such intervention studies take the form of patients falling much more often than expected under the NB regression model. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function to the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
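
    As a rough illustration of the first approach, the sketch below bounds the Pearson residuals of an NB (NB2) regression with a Huber psi-function inside the score equations. The Fisher-consistency correction term of the full M-estimator is omitted and the overdispersion parameter is held fixed, so this is a simplified sketch of the idea, not the paper's estimator.

```python
# Hedged sketch: bounding Pearson residuals in NB regression score equations
# (Huber psi; consistency correction omitted, overdispersion theta fixed).
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
n, theta, c = 300, 2.0, 1.345           # theta: NB overdispersion, c: Huber bound
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
mu = np.exp(X @ beta_true)
y = rng.negative_binomial(theta, theta / (theta + mu))  # mean mu, var mu + mu^2/theta
y[:5] = 60                                              # a few gross outliers

def huber(r):
    return np.clip(r, -c, c)            # bounded influence of large residuals

def score(beta):
    m = np.exp(X @ beta)
    v = m + m ** 2 / theta              # NB2 variance function
    r = (y - m) / np.sqrt(v)            # Pearson residuals
    w = huber(r) / np.sqrt(v) * m       # bounded residual times log-link derivative
    return X.T @ w

sol = root(score, x0=np.zeros(2), method="hybr")
print("robust beta:", sol.x)            # close to beta_true despite outliers
```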

  18. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  19. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
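
    To make the FIM-based design criteria concrete, here is a small sketch that builds the Fisher Information Matrix for the Verhulst-Pearl logistic model via finite-difference sensitivities and compares two sampling designs under the classical D-optimal criterion det(FIM). The parameter values and designs are assumed for illustration, and the paper's SE-optimal criterion is not implemented here.

```python
# Hedged sketch: FIM for the logistic model x(t) = K / (1 + (K/x0 - 1) e^{-rt})
# and a D-optimal comparison of two sampling designs (illustrative only).
import numpy as np

def logistic(t, K, r, x0=1.0):
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def fim(times, K=17.5, r=0.7, sigma=1.0, eps=1e-6):
    theta = np.array([K, r])
    sens = np.empty((times.size, 2))
    for j in range(2):                  # d x / d theta_j by central differences
        dp, dm = theta.copy(), theta.copy()
        dp[j] += eps * theta[j]
        dm[j] -= eps * theta[j]
        sens[:, j] = (logistic(times, *dp) - logistic(times, *dm)) / (2 * eps * theta[j])
    return sens.T @ sens / sigma ** 2   # FIM for iid Gaussian observation errors

uniform = np.linspace(0.1, 10.0, 10)
clustered = np.linspace(4.0, 6.0, 10)   # samples bunched near the inflection
print("det FIM, uniform  :", np.linalg.det(fim(uniform)))
print("det FIM, clustered:", np.linalg.det(fim(clustered)))
```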

  20. Measuring Dark Matter With MilkyWay@home

    NASA Astrophysics Data System (ADS)

    Shelton, Siddhartha; Newberg, Heidi Jo; Arsenault, Matthew; Bauer, Jacob; Desell, Travis; Judd, Roland; Magdon-Ismail, Malik; Newby, Matthew; Rice, Colin; Thompson, Jeffrey; Ulin, Steve; Weiss, Jake; Widrow, Larry

    2016-01-01

    We perform N-body simulations of two component dwarf galaxies (dark matter and stars follow separate distributions) falling into the Milky Way and forming tidal streams. Using MilkyWay@home we optimize the parameters of the progenitor dwarf galaxy and the orbital time to fit the simulated distribution of stars along the tidal stream to the observed distribution of stars. Our initial dwarf galaxy models are constructed with two separate Plummer profiles (one for the dark matter and one for the baryonic matter), sampled using a generalized distribution function for spherically symmetric systems. We perform rigorous testing to ensure that our simulated galaxies are in virial equilibrium and stable over the simulation time. The N-body simulations are performed using a Barnes-Hut tree algorithm. Optimization traverses the likelihood surface of our six model parameters using particle swarm and differential evolution methods. We have generated simulated data with known model parameters that are similar to those of the Orphan Stream. We show that we are able to recover a majority of our model parameters, and most importantly the mass-to-light ratio of the now disrupted progenitor galaxy, using MilkyWay@home. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
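
    The sampling step mentioned above (drawing positions and velocities from a distribution function for a spherically symmetric system) can be illustrated for a single Plummer component with the classic rejection method. The sketch below works in Hénon units (G = M = a = 1) and is an assumption-laden toy, not the MilkyWay@home code, which nests two such components.

```python
# Hedged sketch: sampling one Plummer sphere from its distribution function
# (classic Aarseth-Henon-Wielen rejection method, units G = M = a = 1).
import numpy as np

rng = np.random.default_rng(42)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sample_plummer(n):
    # Radius from the inverted cumulative mass profile M(r)/M = r^3/(1+r^2)^1.5.
    r = (rng.uniform(0, 1, n) ** (-2.0 / 3.0) - 1.0) ** -0.5
    pos = r[:, None] * random_unit_vectors(n)
    # Speed fraction q = v/v_esc with density ~ q^2 (1 - q^2)^3.5 (rejection).
    q = np.empty(n)
    for i in range(n):
        while True:
            qt, yt = rng.uniform(0, 1), rng.uniform(0, 0.1)
            if yt < qt ** 2 * (1 - qt ** 2) ** 3.5:
                q[i] = qt
                break
    v_esc = np.sqrt(2.0) * (1.0 + r ** 2) ** -0.25
    vel = (q * v_esc)[:, None] * random_unit_vectors(n)
    return pos, vel

pos, vel = sample_plummer(10000)
# Virial check: mean v^2 should be close to 3*pi/32 ~ 0.295 in these units.
print("mean v^2 =", (vel ** 2).sum(axis=1).mean())
```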

  1. Fractals, malware, and data models

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Potter, Andrew N.; Williams, Deborah; Handley, James W.

    2012-06-01

    We examine the hypothesis that the decision boundary between malware and non-malware is fractal. We introduce a novel encoding method derived from text mining for converting disassembled programs first into opstrings, which are then filtered into a reduced opcode alphabet. These opcodes are enumerated and encoded into real floating point number format and used for characterizing the frequency of occurrence and distribution properties of malware functions for comparison with non-malware functions. We use the concept of invariant moments to characterize the highly non-Gaussian structure of the opcode distributions. We then derive Data Model based classifiers from identified features and interpolate and extrapolate the parameter sample space for the derived Data Models. This is done to examine the nature of the parameter space classification boundary between families of malware and the general non-malware category. Preliminary results strongly support the fractal boundary hypothesis, and a summary of our methods and results is presented here.

  2. Prediction of crosslink density of solid propellant binders. [curing of elastomers

    NASA Technical Reports Server (NTRS)

    Marsh, H. E., Jr.

    1976-01-01

    A quantitative theory is outlined which allows calculation of crosslink density of solid propellant binders from a small number of predetermined parameters such as the binder composition, the functionality distributions of the ingredients, and the extent of the curing reaction. The parameter which is partly dependent on process conditions is the extent of reaction. The proposed theoretical model is verified by independent measurement of effective chain concentration and sol and gel fractions in simple compositions prepared from model compounds. The model is shown to correlate tensile data with composition in the case of urethane-cured polyether and certain solid propellants. A formula for the branching coefficient is provided: given the functionality distributions of the ingredients, the corresponding equivalent weights, and a measured or predicted extent of reaction, one can calculate the branching coefficient of such a system for any desired composition.

  3. New force field for molecular simulation of guanidinium-based ionic liquids.

    PubMed

    Liu, Xiaomin; Zhang, Suojiang; Zhou, Guohui; Wu, Guangwen; Yuan, Xiaoliang; Yao, Xiaoqian

    2006-06-22

    An all-atom force field was proposed for a new class of room temperature ionic liquids (RTILs), N,N,N',N'-tetramethylguanidinium (TMG) RTILs. The model is based on the AMBER force field with modifications to several parameters. The refinements include (1) fitting the vibration frequencies for obtaining force coefficients of bonds and angles against the data obtained by ab initio calculations and/or by experiments and (2) fitting the torsion energy profiles of dihedral angles for obtaining torsion parameters against the data obtained by ab initio calculations. To validate the force field, molecular dynamics (MD) simulations at different temperatures were performed for five kinds of RTILs, where TMG acts as a cation and formate, lactate, perchlorate, trifluoroacetate, and trifluoromethylsulfonate act as anions. The predicted densities were in good agreement with the experimental data. Radial distribution functions (RDFs) and spatial distribution functions (SDFs) were investigated to depict the microscopic structures of the RTILs.

  4. Dispersion in a thermal plasma including arbitrary degeneracy and quantum recoil.

    PubMed

    Melrose, D B; Mushtaq, A

    2010-11-01

    The longitudinal response function for a thermal electron gas is calculated including two quantum effects exactly: degeneracy and the quantum recoil. The Fermi-Dirac distribution is expanded in powers of a parameter that is small in the nondegenerate limit and the response function is evaluated in terms of the conventional plasma dispersion function to arbitrary order in this parameter. The infinite sum is performed in terms of polylogarithms in the long-wavelength and quasistatic limits, giving results that apply for arbitrary degeneracy. The results are applied to the dispersion relations for Langmuir waves and to screening, reproducing known results in the nondegenerate and completely degenerate limits, and generalizing them to arbitrary degeneracy.
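
    The "conventional plasma dispersion function" Z referenced here is readily evaluated through the Faddeeva function, since Z(zeta) = i*sqrt(pi)*w(zeta) (Fried and Conte's function); a minimal sketch:

```python
# Minimal sketch: the plasma dispersion function Z via the Faddeeva function.
import numpy as np
from scipy.special import wofz

def plasma_dispersion(zeta):
    # Z(zeta) = i sqrt(pi) w(zeta), with the usual Landau continuation
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def plasma_dispersion_deriv(zeta):
    return -2.0 * (1.0 + zeta * plasma_dispersion(zeta))  # Z'(z) = -2(1 + z Z(z))

# Example: the limiting value Z(0) = i sqrt(pi).
print(plasma_dispersion(0.0))   # (0+1.7724538509055159j)
```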

  5. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
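
    For readers unfamiliar with ABC, a minimal rejection-ABC loop (the building block that cosmoabc's Population Monte Carlo variant refines) might look like the toy below; the Poisson "catalog", prior range and tolerance are assumptions, and this is not the cosmoabc API.

```python
# Toy rejection ABC: accept prior draws whose simulated data lie close to
# the observations under a summary-statistic distance.
import numpy as np

rng = np.random.default_rng(3)
observed = rng.poisson(lam=4.2, size=500)        # "catalog" with unknown rate

def simulator(lam):
    return rng.poisson(lam=lam, size=observed.size)

def distance(a, b):
    return abs(a.mean() - b.mean())              # summary-statistic distance

posterior = []
while len(posterior) < 2000:
    lam = rng.uniform(0.0, 10.0)                 # draw from the prior
    if distance(simulator(lam), observed) < 0.05:
        posterior.append(lam)                    # keep draws that match the data

print("posterior mean ~", np.mean(posterior))    # close to 4.2
```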

  6. Phenomenological characteristic of the electron component in gamma-quanta initiated showers

    NASA Technical Reports Server (NTRS)

    Nikolsky, S. I.; Stamenov, J. N.; Ushev, S. Z.

    1985-01-01

    The phenomenological characteristics of the electron component in showers initiated by primary gamma-quanta were analyzed on the basis of the Tien Shan experimental data. It is shown that the lateral distribution of the electrons in gamma-quanta initiated showers can be described by an NKG function with age parameter S̄ = 0.76 ± 0.02, different from the same parameter for normal showers of the same size, S̄ = 0.85 ± 0.01. The lateral distribution of the corresponding electron energy flux in gamma-quanta initiated showers is steeper than in normal cosmic ray showers.
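
    For reference, the NKG lateral distribution quoted above has a closed form; the sketch below evaluates it for the two age parameters reported. The Molière radius value is an assumption, not a figure from the record.

```python
# Hedged sketch: the NKG lateral distribution rho(r) for shower size Ne,
# Moliere radius rM and age parameter s.
import numpy as np
from scipy.special import gamma

def nkg(r, Ne, s, rM=118.0):   # rM in meters; value assumed for illustration
    c = gamma(4.5 - s) / (2.0 * np.pi * gamma(s) * gamma(4.5 - 2.0 * s))
    x = r / rM
    return Ne / rM ** 2 * c * x ** (s - 2.0) * (1.0 + x) ** (s - 4.5)

r = np.logspace(0, 2, 5)   # 1 m to 100 m from the shower core
# Gamma-initiated showers (s = 0.76) are steeper than normal ones (s = 0.85):
print(nkg(r, 1e6, 0.76) / nkg(r, 1e6, 0.85))
```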

  7. Computational study on the behaviors of granular materials under mechanical cycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoliang; Ye, Minyou; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn

    2015-11-07

    Considering that fusion pebble beds are likely to be subjected to cyclic compression excitation in their future applications, we present a computational study of the effect of mechanical cycling on the behavior of granular matter. The correctness of our numerical experiments was confirmed by a comparison with effective medium theory. Under the cyclic loads, a fast granular compaction was observed to evolve following a stretched exponential law. In addition, increasing stiffening of the packing structure, especially a decreasing pressure dependence of the moduli due to granular consolidation, was also observed. For the force chains inside the pebble beds, both the internal force distribution and the spatial distribution of force chains become increasingly uniform as the external force perturbation proceeds, thereby producing stress relief on the grains. In this case, the originally proposed 3-parameter Mueth function was found to fail to describe the internal force distribution. An improved functional form with 4 parameters is therefore proposed here and proved to better fit the data. These findings provide more detailed information on pebble beds for fusion design and analysis.

  8. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.

  9. The evolution of cooperation on geographical networks

    NASA Astrophysics Data System (ADS)

    Li, Yixiao; Wang, Yi; Sheng, Jichuan

    2017-11-01

    We study the evolutionary public goods game on geographical networks, i.e., complex networks located on a geographical plane. The geographical features enter in two ways: the geographically induced network structure influences the overall evolutionary dynamics, and the geographical length of an edge influences the cost incurred when the two players at its ends interact. For the latter effect, we design a new cost function for cooperators, which simply assumes that the longer the distance between two players, the higher the cost the cooperators have to pay. In this study, network substrates are generated by a previously proposed spatial network model with a cost-benefit parameter controlling the network topology. Our simulations show that the greatest promotion of cooperation is achieved in the intermediate regime of this parameter, in which empirical estimates of various railway networks fall. Further, we investigate how the distribution of the edges' geographical costs influences the evolutionary dynamics and consider three patterns of the distribution: an approximately equal distribution, a diverse distribution, and a polarized distribution. For normal geographical networks generated using intermediate values of the cost-benefit parameter, a diverse distribution hinders the evolution of cooperation, whereas a polarized distribution lowers the threshold value of the amplification factor for cooperation in the public goods game. These results are helpful for understanding the evolution of cooperation on real-world geographical networks.

  10. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

    Background Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results The simulated dose rate constant for the GZP6 source was 1.104±0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than for both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
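
    These quantities fit into the TG-43 formalism; in the 1D point-source approximation the dose rate is D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r). The sketch below evaluates this with placeholder values: only the dose rate constant comes from the record above, and the radial dose function table is made up, so this is illustrative, not GZP6 data.

```python
# Hedged sketch of the 1D (point-source) TG-43 dose-rate equation.
# All numbers except the dose rate constant are hypothetical placeholders.
import numpy as np

S_K = 40000.0          # air-kerma strength, U (hypothetical)
Lam = 1.104            # dose rate constant, cGy h^-1 U^-1 (value quoted above)
r0 = 1.0               # reference distance, cm
g_tab = {0.5: 1.01, 1.0: 1.0, 2.0: 0.98, 5.0: 0.92}   # radial dose function (made up)

def dose_rate(r, phi_an=1.0):
    # Interpolate g(r) from the table; phi_an is the 1D anisotropy factor.
    g = np.interp(r, sorted(g_tab), [g_tab[k] for k in sorted(g_tab)])
    return S_K * Lam * (r0 / r) ** 2 * g * phi_an      # cGy/h

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r} cm -> {dose_rate(r):9.1f} cGy/h")
```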

  11. Army Selective Reenlistment Bonus Management System: Functional and User Documentation

    DTIC Science & Technology

    2005-06-01

    Approved for public release; distribution is unlimited. … retention parameters that capture the financial incentive effects of the SRB reenlistment program were estimated for Army occupations … all possible outcomes in the Army SRB Data Utility. The Army SRB Program provides financial incentives for reenlistment that vary by occupational …

  12. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in a semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using the Monte Carlo methods of the GEANT4 software with ultra-low energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. An optimal pore configuration was estimated.
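
    Approximating a simulated depth profile with an exponential, as described above, is a short curve fit; a sketch with synthetic data standing in for the GEANT4 output (decay length and amplitude are assumptions):

```python
# Hedged sketch: exponential fit to a (synthetic) depth profile of the
# electron-hole pair generation rate in silicon.
import numpy as np
from scipy.optimize import curve_fit

depth = np.linspace(0.0, 10.0, 50)   # depth into silicon, micrometers
noise = 1 + 0.05 * np.random.default_rng(7).normal(size=depth.size)
g_mc = 1e12 * np.exp(-depth / 2.3) * noise   # stand-in for Monte Carlo output

def model(x, g0, L):
    return g0 * np.exp(-x / L)       # pairs per unit depth; L is the decay length

(g0, L), _ = curve_fit(model, depth, g_mc, p0=[1e12, 1.0])
print(f"fitted decay length: {L:.2f} um")
```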

  13. Analysis of corner cracks at hole by a 3-D weight function method with stresses from finite element method

    NASA Technical Reports Server (NTRS)

    Zhao, W.; Newman, J. C., Jr.; Sutton, M. A.; Wu, X. R.; Shivakumar, K. N.

    1995-01-01

    Stress intensity factors for quarter-elliptical corner cracks emanating from a circular hole are determined using a 3-D weight function method combined with a 3-D finite element method. The 3-D finite element method is used to analyze the uncracked configuration and provide the stress distribution in the region where the crack is to occur. Using this stress distribution as input, the 3-D weight function method is used to determine stress intensity factors. Three different loading conditions, i.e. remote tension, remote bending and wedge loading, are considered for a wide range of geometrical parameters. The significance of using the 3-D uncracked stress distribution and the difference between single and double corner cracks are studied. Typical crack opening displacements are also provided. Comparisons are made with solutions available in the literature.

  14. Polarized structure functions in a constituent quark scenario

    NASA Astrophysics Data System (ADS)

    Scopetta, Sergio; Vento, Vicente; Traini, Marco

    1998-12-01

    Using a simple picture of the constituent quark as a composite system of point-like partons, we construct the polarized parton distributions by a convolution between constituent quark momentum distributions and constituent quark structure functions. Using unpolarized data to fix the parameters, we achieve good agreement with the polarization experiments for the proton, but not for the neutron. By relaxing our assumptions for the sea distributions, we define new quark functions for the polarized case, which reproduce the proton data well and are in better agreement with the neutron data. When our results are compared with similar calculations using non-composite constituent quarks, the agreement of the present scheme with experiment is impressive. We conclude that, also in the polarized case, DIS data are consistent with a low energy scenario dominated by composite constituents of the nucleon.

  15. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
    Program summary:
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of the input parameter sets, plus management of the job template files including job submission to the grid, control and information retrieval.
    Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler.
    Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
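
    The "cross product of the input parameter sets" in the solution method is easy to picture; a minimal sketch follows, with an illustrative file format rather than the tool's actual job-template syntax:

```python
# Toy sketch: generate job templates as the cross product of parameter sets
# (hypothetical key = value format, not the Grid[Way] template syntax).
import itertools

params = {
    "beta": [0.1, 0.2, 0.4],
    "seed": range(3),
    "model": ["A", "B"],
}

for i, combo in enumerate(itertools.product(*params.values())):
    settings = dict(zip(params.keys(), combo))
    template = "\n".join(f"{k} = {v}" for k, v in settings.items())
    with open(f"job_{i:04d}.jt", "w") as fh:   # one template file per job
        fh.write(template + "\n")
print(f"wrote {i + 1} job templates")
```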

  16. Remote sensing of earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, J. A.

    1988-01-01

    Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean N̄. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite N̄ or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.

  17. Whistler waves with electron temperature anisotropy and non-Maxwellian distribution functions

    NASA Astrophysics Data System (ADS)

    Malik, M. Usman; Masood, W.; Qureshi, M. N. S.; Mirza, Arshad M.

    2018-05-01

    Previous works on whistler waves with electron temperature anisotropy reported the dependence on plasma parameters; however, they did not explore the reasons behind the observed differences. A comparative analysis of whistler waves with different electron distributions has not been made to date. This paper attempts to address both these issues in detail by making a detailed comparison of the dispersion relations and growth rates of whistler waves with electron temperature anisotropy for Maxwellian, Cairns, kappa and generalized (r, q) distributions, varying the key plasma parameters for the problem under consideration. It has been found that the growth rate of the whistler instability is maximum for the flat-topped distribution whereas it is minimum for the Maxwellian distribution. This work not only summarizes and complements the previous work done on whistler waves with electron temperature anisotropy but also provides a general framework to understand the linear propagation of whistler waves with electron temperature anisotropy that is applicable in all regions of space plasmas where satellite missions have indicated their presence.

  18. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
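
    For context, the traditional PERT estimates that this paper generalizes follow from the beta assumption with the standard shape convention alpha + beta = 6; a short sketch:

```python
# Classical PERT point estimates and the matching beta shape parameters
# (a, b taken as the distribution's range; illustrative numbers).
a, m, b = 2.0, 5.0, 13.0          # optimistic, most likely, pessimistic

mean = (a + 4 * m + b) / 6        # classical PERT mean
sd = (b - a) / 6                  # classical PERT standard deviation

# Beta(alpha, beta) on [a, b] with mode m under the convention alpha + beta = 6:
alpha = 1 + 4 * (m - a) / (b - a)
beta = 1 + 4 * (b - m) / (b - a)
print(mean, sd, alpha, beta)
```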

  19. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution from given information. The proposed method gives an alternative way to assess the input function from the existing data. It allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.

  20. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197

  1. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    PubMed

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important to solve the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise performance. This paper proposes a novel estimation algorithm called a two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signals parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-Fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computational cost.

  2. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Mantry, Sonny; Petriello, Frank

    2010-05-01

    We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh≫pT≫ΛQCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher order radiative corrections in αs(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and ΛQCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.

  3. Optical Imaging and Radiometric Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials, to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).
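
    The Fourier-optics PSF calculation at the core of such tools reduces to transforming a complex pupil function; a minimal sketch with an assumed circular aperture and a defocus-like OPD map (not OPTOOL itself):

```python
# Hedged sketch: PSF = |FFT(pupil * exp(2*pi*i*OPD/lambda))|^2 for an
# assumed circular aperture with a Zernike-like defocus OPD map.
import numpy as np

n, wavelength = 512, 1e-6                     # grid size, wavelength in meters
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
rho = np.hypot(x, y) / (n / 8)                # aperture radius = n/8 pixels
pupil = (rho <= 1.0).astype(float)

# Example OPD map: a defocus term (Zernike 2*rho^2 - 1) with 50 nm amplitude.
opd = 50e-9 * (2 * rho ** 2 - 1) * pupil

field = pupil * np.exp(2j * np.pi * opd / wavelength)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
psf /= psf.sum()
print("peak (Strehl-like) value:", psf.max())
```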

  4. Seat pan and backrest pressure distribution while sitting in office chairs.

    PubMed

    Zemp, Roland; Taylor, William R; Lorenzetti, Silvio

    2016-03-01

    Nowadays, an increasing amount of time is spent seated, especially in office environments, where sitting comfort and support are increasingly important due to the prevalence of musculoskeletal disorders. The aim of this study was to develop a methodology for chair-specific sensor mat calibration, to evaluate the interconnections between specific pressure parameters and to establish those that are most meaningful and significant in order to differentiate pressure distribution measures between office chairs. The shape of the exponential calibration function was highly influenced by the material properties and geometry of the office chairs, and therefore a chair-specific calibration proved to be essential. High correlations were observed between the eight analysed pressure parameters, whereby the pressure parameters could be reduced to a set of four and three parameters for the seat pan and the backrest respectively. In order to find significant differences between office chairs, gradient parameters should be analysed for the seat pan, whereas for the backrest almost all parameters are suitable. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Determination of the mass function of extra-galactic GMCs via NIR color maps. Testing the method in a disk-like geometry

    NASA Astrophysics Data System (ADS)

    Kainulainen, J.; Juvela, M.; Alves, J.

    2007-06-01

    The giant molecular clouds (GMCs) of external galaxies can be mapped with sub-arcsecond resolution using multiband observations in the near-infrared. However, the interpretation of the observed reddening and attenuation of light, and their transformation into physical quantities, is greatly hampered by the effects arising from the unknown geometry and the scattering of light by dust particles. We examine the relation between the observed near-infrared reddening and the column density of the dust clouds. In this paper we particularly assess the feasibility of deriving the mass function of GMCs from near-infrared color excess data. We perform Monte Carlo radiative transfer simulations with 3D models of stellar radiation and clumpy dust distributions. We include the scattered light in the models and calculate near-infrared color maps from the simulated data. The color maps are compared with the true line-of-sight density distributions of the models. We extract clumps from the color maps and compare the observed mass function to the true mass function. For the physical configuration chosen in this study, essentially a face-on geometry, the observed mass function is a non-trivial function of the true mass function with a large number of parameters affecting its exact form. The dynamical range of the observed mass function is confined to 10^3.5-10^5.5 M_⊙ regardless of the dynamical range of the true mass function. The color maps are more sensitive in detecting the high-mass end of the mass function, and on average the masses of clouds are underestimated by a factor of ~10 depending on the parameters describing the dust distribution. A significant fraction of clouds is expected to remain undetected at all masses. The simulations show that the cloud mass function derived from JHK color excess data using a simple foreground screening geometry cannot be regarded as a one-to-one tracer of the underlying mass function.

  6. First-Principles Momentum-Dependent Local Ansatz Wavefunction and Momentum Distribution Function Bands of Iron

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2016-04-01

    We have developed a first-principles local ansatz wavefunction approach with momentum-dependent variational parameters on the basis of the tight-binding LDA+U Hamiltonian. The theory goes beyond the first-principles Gutzwiller approach and quantitatively describes correlated electron systems. Using the theory, we find that the momentum distribution function (MDF) bands of paramagnetic bcc Fe along high-symmetry lines show a large deviation from the Fermi-Dirac function for the d electrons with eg symmetry and yield the momentum-dependent mass enhancement factors. The calculated average mass enhancement m*/m = 1.65 is consistent with low-temperature specific heat data as well as recent angle-resolved photoemission spectroscopy (ARPES) data.

  7. Global survey of star clusters in the Milky Way. VI. Age distribution and cluster formation history

    NASA Astrophysics Data System (ADS)

    Piskunov, A. E.; Just, A.; Kharchenko, N. V.; Berczik, P.; Scholz, R.-D.; Reffert, S.; Yen, S. X.

    2018-06-01

    Context. The all-sky Milky Way Star Clusters (MWSC) survey provides uniform and precise ages, along with other relevant parameters, for a wide variety of clusters in the extended solar neighbourhood. Aims: In this study we aim to construct the cluster age distribution, investigate its spatial variations, and discuss constraints on cluster formation scenarios of the Galactic disk during the last 5 Gyr. Methods: Due to the spatial extent of the MWSC, we have considered spatial variations of the age distribution along the galactocentric radius RG and along the Z-axis. For the analysis of the age distribution we used 2242 clusters, which all lie within roughly 2.5 kpc of the Sun. To connect the observed age distribution to the cluster formation history we built an analytical model based on simple assumptions on the cluster initial mass function and on the cluster mass-lifetime relation, fit it to the observations, and determined the parameters of the cluster formation law. Results: Comparison with the literature shows that earlier results strongly underestimated the number of evolved clusters with ages t ≳ 100 Myr. Recent studies based on all-sky catalogues agree better with our data, but still lack the oldest clusters with ages t ≳ 1 Gyr. We do not observe a strong variation in the age distribution along RG, though we find an enhanced fraction of older clusters (t > 1 Gyr) in the inner disk. In contrast, the distribution strongly varies along Z. The high-altitude distribution practically does not contain clusters with t < 1 Gyr. With simple assumptions on the cluster formation history, the cluster initial mass function and the cluster lifetime we can reproduce the observations. The cluster formation rate and the cluster lifetime are strongly degenerate, which does not allow us to disentangle different formation scenarios. In all cases the cluster formation rate is strongly declining with time, and the cluster initial mass function is very shallow at the high-mass end.

  8. Estimating the Spatial Distribution of Groundwater Age Using Synoptic Surveys of Environmental Tracers in Streams

    NASA Astrophysics Data System (ADS)

    Gardner, W. P.

    2017-12-01

    A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional scale spatial distribution of groundwater age.
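
    The lumped parameter building block can be illustrated with an exponential residence-time distribution and a decaying tracer; the input curve, mean age and half-life below are assumptions for illustration, not the Berland River data.

```python
# Hedged sketch: tracer concentration of groundwater discharge from an
# exponential residence-time distribution (lumped parameter model).
import numpy as np

tau = np.arange(0.0, 200.0, 0.5)                 # residence time, years
tau_m = 25.0                                     # mean residence time (assumed)
g = np.exp(-tau / tau_m) / tau_m                 # exponential RTD

years = 2017.0 - tau                             # recharge year for each tau
c_in = np.interp(years, [1950, 1990, 2017], [0.0, 5.0, 2.5])  # assumed input curve
lam = np.log(2) / 12.32                          # e.g. tritium decay constant, 1/yr

# Convolution integral: C_out = integral of g(tau) C_in(t - tau) exp(-lam tau) dtau
c_out = np.sum(g * c_in * np.exp(-lam * tau)) * 0.5   # 0.5 yr grid spacing
print(f"discharge concentration: {c_out:.2f} TU")
```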

  9. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  10. Measurements of neutral and ion velocity distribution functions in a Hall thruster

    NASA Astrophysics Data System (ADS)

    Svarnas, Panagiotis; Romadanov, Ivan; Diallo, Ahmed; Raitses, Yevgeny

    2015-11-01

    A Hall thruster is a plasma device for space propulsion. It utilizes a cross-field discharge to generate a partially ionized, weakly collisional plasma with magnetized electrons and non-magnetized ions. The ions are accelerated by the electric field to produce the thrust. There is a relatively large number of studies devoted to the characterization of the accelerated ions, including measurements of the ion velocity distribution function using the laser-induced fluorescence diagnostic. The interaction of these accelerated ions with neutral atoms in the thruster and the thruster plume is a subject of ongoing studies, which require combined monitoring of ion and neutral velocity distributions. Herein, the laser-induced fluorescence technique has been employed to study the neutral and singly-charged ion velocity distribution functions in a 200 W cylindrical Hall thruster operating with xenon propellant. An optical system is installed in the vacuum chamber enabling spatially resolved axial velocity measurements. The fluorescence signals are well separated from the plasma background emission by modulating the laser beam and using lock-in detectors. Measured velocity distribution functions of neutral atoms and ions at different operating parameters of the thruster are reported and analyzed. This work was supported by DOE contract DE-AC02-09CH11466.

  11. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against observed spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it tends to yield little improvement in the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control over the spatial calibration we introduced three additional parameters to the model. These new parameters are part of an empirical equation to calculate the crop coefficient (Kc) from daily LAI maps, used to update potential evapotranspiration (PET) as model input. This is done instead of correcting/updating PET with just a uniform (or aspect-driven) factor as in the mHM model (version 5.3). We selected the 20 most important parameters out of 53 mHM parameters based on a comprehensive sensitivity analysis (Cuntz et al., 2015). We calibrated mHM at 1 km spatial and daily temporal resolution for the Skjern basin in Denmark using the Shuffled Complex Evolution (SCE) algorithm and inputs at different spatial scales, i.e. meteorological data at 10 km and morphological data at 250 m. We used correlation coefficients between observed monthly (summer months only) MODIS data, calculated from cloud-free days over the calibration period from 2001 to 2008, and simulated aET from mHM over the same period. Other metrics, e.g. mapcurves and the fraction skill score, are also included in our objective function to assess the co-location of the grid cells. The preliminary results show that multi-objective calibration of mHM against observed streamflow and spatial patterns together does not significantly reduce the spatial errors in aET while it improves the streamflow simulations. This is a strong signal for further investigation of the multi-parameter regionalization affecting spatial aET patterns and of the weighting of the spatial metrics in the objective function relative to the streamflow metrics.

  12. On the Relationship Between Transfer Function-derived Response Times and Hydrograph Analysis Timing Parameters: Are there Similarities?

    NASA Astrophysics Data System (ADS)

    Bansah, S.; Ali, G.; Haque, M. A.; Tang, V.

    2017-12-01

    The proportion of precipitation that becomes streamflow is a function of internal catchment characteristics - which include geology, landscape characteristics and vegetation - that influence overall storage dynamics. The timing and quantity of water discharged by a catchment are indeed embedded in event hydrographs. Event hydrograph timing parameters, such as the response lag and time of concentration, are important descriptors of how long it takes the catchment to respond to input precipitation and how long it takes the latter to filter through the catchment. However, the extent to which hydrograph timing parameters relate to average response times derived from fitting transfer functions to annual hydrographs is unknown. In this study, we used a gamma transfer function to determine catchment average response times as well as event-specific hydrograph parameters across a network of eight nested prairie catchments, ranging from 0.19 km2 to 74.6 km2, located in south-central Manitoba (Canada). Various statistical analyses were then performed to correlate average response times - estimated using the parameters of the fitted gamma transfer function - to event-specific hydrograph parameters. Preliminary results show significant interannual variations in response times and hydrograph timing parameters: the former were on the order of a few hours to days, while the latter ranged from a few days to weeks. Some statistically significant relationships were detected between response times and event-specific hydrograph parameters. Future analyses will involve the comparison of statistical distributions of event-specific hydrograph parameters with those of runoff response times and baseflow transit times in order to quantify catchment storage dynamics across a range of temporal scales.
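
    A gamma transfer function of this kind maps precipitation to runoff by convolution, and its shape and scale parameters give the mean response time directly; a sketch with assumed parameter values:

```python
# Hedged sketch: gamma transfer function (unit hydrograph) whose fitted
# shape k and scale theta give the mean response time k * theta.
import numpy as np
from scipy.stats import gamma as gamma_dist

dt = 1.0                                        # time step, hours
t = np.arange(0.0, 240.0, dt)
k, theta = 2.0, 18.0                            # shape, scale (assumed values)
h = gamma_dist.pdf(t, a=k, scale=theta)         # transfer function / unit hydrograph

rng = np.random.default_rng(5)
precip = np.where(rng.uniform(size=t.size) < 0.05,
                  rng.exponential(4.0, t.size), 0.0)   # synthetic rainfall pulses
runoff = np.convolve(precip, h)[: t.size] * dt  # streamflow response

print("mean response time =", k * theta, "hours")
```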

  13. Comparison of Radiation Pressure Perturbations on Rocket Bodies and Debris at Geosynchronous Earth Orbit

    DTIC Science & Technology

    2014-09-01

    has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper ... seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in ... orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a

  14. Electro-osmotic flow of couple stress fluids in a micro-channel propagated by peristalsis

    NASA Astrophysics Data System (ADS)

    Tripathi, Dharmendra; Yadav, Ashu; Anwar Bég, O.

    2017-04-01

    A mathematical model is developed for electro-osmotic peristaltic pumping of a non-Newtonian liquid in a deformable micro-channel. Stokes' couple stress fluid model is employed to represent realistic working liquids. The Poisson-Boltzmann equation for the electric potential distribution is implemented owing to the presence of an electrical double layer (EDL) in the micro-channel. Using the long-wavelength, lubrication-theory and Debye-Hückel approximations, the linearized, transformed, dimensionless boundary value problem is solved analytically. The influence of the electro-osmotic parameter (inversely proportional to the Debye length), the maximum electro-osmotic velocity (a function of the external applied electric field) and the couple stress parameter on axial velocity, volumetric flow rate, pressure gradient, local wall shear stress and stream function distributions is evaluated in detail with the aid of graphs. The Newtonian fluid case is retrieved as a special case with vanishing couple stress effects. With increasing couple stress parameter there is a significant increase in the axial pressure gradient, whereas the core axial velocity is reduced. An increase in the electro-osmotic parameter both induces flow acceleration in the core region (around the channel centreline) and substantially enhances the axial pressure gradient. The study is relevant to the simulation of novel smart bio-inspired space pumps, chromatography and medical micro-scale devices.
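
    For orientation, a small sketch of the linearized (Debye-Hückel) potential profile in a parallel-plate channel, psi(y) = zeta*cosh(kappa*y)/cosh(kappa*h); the values are illustrative and the static slit geometry is a simplification of the deformable channel analysed above:

        # Hedged sketch of the Debye-Hückel electric potential across a slit
        # micro-channel of half-height h; kappa is the inverse Debye length.
        import numpy as np

        zeta = 1.0      # dimensionless wall (zeta) potential
        h = 1.0         # channel half-height (dimensionless)
        kappa = 10.0    # electro-osmotic parameter (inverse Debye length)

        y = np.linspace(-h, h, 201)
        psi = zeta * np.cosh(kappa * y) / np.cosh(kappa * h)
        # Larger kappa confines the potential drop to thin wall layers, which is
        # why a thinner EDL drives a more plug-like electro-osmotic velocity.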

  15. A Functional Varying-Coefficient Single-Index Model for Functional Response Data

    PubMed Central

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2016-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540

  16. A Functional Varying-Coefficient Single-Index Model for Functional Response Data.

    PubMed

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2017-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer's Disease Neuroimaging Initiative (ADNI) study.

  17. Simulation of perturbation produced by an absorbing spherical body in collisionless plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasovsky, V. L., E-mail: vkrasov@iki.rssi.ru; Kiselyov, A. A., E-mail: alexander.kiselyov@stonehenge-3.net.ru; Dolgonosov, M. S.

    2017-01-15

    A steady plasma state reached in the course of charging of an absorbing spherical body is found using computational methods. Numerical simulations provide complete information on this process, thereby allowing one to find the spatiotemporal dependences of the physical quantities and observe the kinetic phenomena accompanying the formation of stable electron and ion distributions in phase space. The distribution function of trapped ions is obtained, and their contribution to the screening of the charged sphere is determined. The sphere charge and the charge of the trapped-ion cloud are determined as functions of the unperturbed plasma parameters.

  18. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.

  19. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Nagpal, V. K.; Chamis, C. C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping have been included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.

  20. Generalised quasiprobability distribution for Hermite polynomial squeezed states

    NASA Astrophysics Data System (ADS)

    Datta, Sunil; D'Souza, Richard

    1996-02-01

    Generalized quasiprobability distributions (QPD) for Hermite polynomial states are presented. These states are solutions of an eigenvalue equation which is quadratic in the creation and annihilation operators. Analytical expressions for the QPD are presented for some special cases of the eigenvalues. For large squeezing, these analytical expressions for the QPD take the form of a finite series in even Hermite functions. These expressions very transparently exhibit the transition between the P, Q and W functions corresponding to changes in the s-parameter of the QPD. Further, they clearly show the two-photon nature of the processes involved in the generation of these states.

  1. Algorithms and physical parameters involved in the calculation of model stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Merlo, D. C.

    This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We implement methods that modify the originally adopted temperature distribution in the atmosphere in order to obtain a constant energy flux throughout; we identify their limitations and correct numerical instabilities. We integrate the transfer equation by directly solving the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions for the light elements. We also discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.

  2. Simple reaction time in 8-9-year old children environmentally exposed to PCBs.

    PubMed

    Šovčíková, Eva; Wimmerová, Soňa; Strémy, Maximilián; Kotianová, Janette; Loffredo, Christopher A; Murínová, Ľubica Palkovičová; Chovancová, Jana; Čonka, Kamil; Lancz, Kinga; Trnovec, Tomáš

    2015-12-01

    Simple reaction time (SRT) has been studied in children exposed to polychlorinated biphenyls (PCBs), with variable results. In the current work we examined SRT in 146 boys and 161 girls, aged 8.53 ± 0.65 years (mean ± SD), exposed to PCBs in the environment of eastern Slovakia. We divided the children into tertiles of increasing PCB serum concentration. The mean ± SEM serum concentration of the sum of 15 PCB congeners was 191.15 ± 5.39, 419.23 ± 8.47, and 1315.12 ± 92.57 ng/g lipids in children of the first, second, and third tertiles, respectively. We created probability distribution plots for each child from their multiple trials of the SRT testing. We fitted the response time distributions from all valid trials with the ex-Gaussian function, a convolution of a normal and an exponential function, providing estimates of three independent parameters μ, σ, and τ: μ is the mean of the normal component, σ is the standard deviation of the normal component, and τ is the mean of the exponential component. Group response time distributions were calculated using the Vincent averaging technique. A Q-Q plot comparing the probability distribution of the first vs. the third tertile indicated that the deviation of the quantiles of the latter tertile from those of the former begins at the 40th percentile and does not show a positive acceleration. This was confirmed by a comparison of the ex-Gaussian parameters of these two tertiles adjusted for sex, age, Raven IQ of the child, mother's and father's education, behavior at home and school, and BMI: the results showed that the parameters μ and τ significantly (p ≤ 0.05) increased with PCB exposure. Similar increases of the ex-Gaussian parameter τ in children suffering from ADHD have been previously reported and interpreted as intermittent attentional lapses, but these were not seen in our cohort. Our study has confirmed that environmental exposure of children to PCBs is associated with a prolongation of simple reaction time, reflecting impairment of cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
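
    The ex-Gaussian fit itself is easy to reproduce; below is a hedged sketch using SciPy's exponentially modified normal (exponnorm, parameterised by K = tau/sigma) on synthetic reaction times, not the study's data:

        # Hedged sketch: ex-Gaussian fit to simulated reaction times (ms).
        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(0)
        rt = rng.normal(320, 40, 2000) + rng.exponential(90, 2000)

        K, loc, scale = exponnorm.fit(rt)
        mu, sigma = loc, scale   # normal component parameters
        tau = K * scale          # mean of the exponential component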

  3. Trend analysis for daily rainfall series of Barcelona

    NASA Astrophysics Data System (ADS)

    Ortego, M. I.; Gibergans-Báguena, J.; Tolosana-Delgado, R.; Egozcue, J. J.; Llasat, M. C.

    2009-09-01

    Frequency analysis of hydrological series is a key step towards an in-depth understanding of the behaviour of hydrologic events. The occurrence of extreme hydrologic events in an area may have great social and economic impacts. A good understanding of hazardous events improves the planning of human activities. A useful model for hazard assessment of extreme hydrologic events in an area is the peaks-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude X of each event is modeled as an arbitrary random variable whose excesses over the threshold x0, Y = X - x0 given X > x0, follow a Generalized Pareto Distribution (GPD), F_Y(y | β, ξ) = 1 - (1 + ξ y / β)^(-1/ξ), 0 ≤ y < y_sup, where y_sup = +∞ if ξ ≥ 0 and y_sup = -β/ξ if ξ < 0. The limiting distribution for ξ = 0 is an exponential one. Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. In order to account for the uncertainty of the estimation of the GPD parameters, a Bayesian approach is chosen. This approach allows necessary conditions on the parameters of the distribution for our particular phenomenon to be included, and adequately propagates the uncertainty of the estimates to the hazard parameters, such as return periods. A common concern is whether the magnitudes of hazardous events have changed in the last decades. Long data series are much appreciated for properly studying these issues. The series of daily rainfall in Barcelona (1854-2006) has been selected; it is one of the longest European daily rainfall series available. Daily rainfall is better described on a relative scale and is therefore suitably treated in a log-scale. Accordingly, log-precipitation is identified with X. Excesses over a threshold are modeled by a GPD with a limited maximum value. An additional assumption is that the distribution of the excesses Y has a limited upper tail and, therefore, ξ < 0 and y_sup = -β/ξ. Such a long data series provides valuable information about the phenomenon at hand, and therefore a first step is to check its reliability. The first part of the work focuses on the possible existence of abrupt changes in the parameters of the GPD. These abrupt changes may be due to changes in the location of the observatories and/or technological advances introduced in the measuring instruments. The second part of the work examines the possible existence of trends. The parameters of the model are considered as functions of time. A new parameterisation of the GPD, μ = ln(-β/ξ) and ν = ln(-ξβ), is suggested in order to deal parsimoniously with this climate variation: the classical scale and shape parameters of the GPD (β, ξ) are reformulated as a location parameter μ, linked to the upper limit of the distribution, and a shape parameter ν. In this reparameterisation, the parsimonious choice is to consider the shape as a linear function of time, ν(t) = ν0 + ν1 t, while keeping the location fixed, μ(t) = μ0. Climate change is then assessed by checking the hypothesis ν1 = 0. Results show no significant abrupt changes in the excess distribution of the Barcelona daily rainfall series, but they do suggest a significant change in the parameters, and therefore the existence of a trend in daily rainfall over this period.
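
    A hedged sketch of the basic POT machinery with SciPy's generalized Pareto, whose shape c plays the role of ξ above (c < 0 gives the bounded upper tail assumed in the study); the data and threshold choice are synthetic placeholders:

        # Hedged sketch: peaks-over-threshold fit of a generalized Pareto.
        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(1)
        x = rng.lognormal(mean=3.0, sigma=0.6, size=5000)  # log-rainfall proxy
        x0 = np.quantile(x, 0.95)                          # threshold
        excess = x[x > x0] - x0                            # y = x - x0 | x > x0

        c, loc, beta = genpareto.fit(excess, floc=0)       # c ~ xi, beta ~ scale
        y_sup = -beta / c if c < 0 else np.inf             # finite bound if c < 0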

  4. Coupled semivariogram uncertainty of hydrogeological and geophysical data on capture zone uncertainty analysis

    USGS Publications Warehouse

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.

    2008-01-01

    This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and a pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.

  5. Distribution function of random strains in an elastically anisotropic continuum and defect strengths of Tm3+ impurity ions in crystals with zircon structure

    NASA Astrophysics Data System (ADS)

    Malkin, B. Z.; Abishev, N. M.; Baibekov, E. I.; Pytalev, D. S.; Boldyrev, K. N.; Popova, M. N.; Bettinelli, M.

    2017-07-01

    We construct a distribution function of the strain-tensor components induced by point defects in an elastically anisotropic continuum, which can be used to account quantitatively for many effects observed in different branches of condensed matter physics. Parameters of the derived six-dimensional generalized Lorentz distribution are expressed through integrals computed over the array of strains. The distribution functions for the cubic diamond and elpasolite crystals and tetragonal crystals with the zircon and scheelite structures are presented. Our theoretical approach is supported by a successful modeling of specific line shapes of singlet-doublet transitions of the Tm3+ ions doped into ABO4 (A = Y, Lu; B = P, V) crystals with zircon structure, observed in high-resolution optical spectra. The values of the defect strengths of impurity Tm3+ ions in the oxygen surroundings, obtained as a result of this modeling, can be used in future studies of random strains in different rare-earth oxides.

  6. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain-rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and, by extension, rain-rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs with the same functional form and only systematic variations in parameters (such as size or shape) would exist. Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.

  7. Quantum field theory in the presence of a medium: Green's function expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kheirandish, Fardin; Salimi, Shahriar

    2011-12-15

    Starting from a Lagrangian and using functional-integration techniques, series expansions of the Green's function of a real scalar field and of the electromagnetic field in the presence of a medium are obtained. The expansion parameter in these series is the susceptibility function of the medium. Relativistic and nonrelativistic Langevin-type equations are derived. Series expansions for the Lifshitz energy at finite temperature and for an arbitrary matter distribution are derived. Covariant formulations for both scalar and electromagnetic fields are introduced. Two illustrative examples are given.

  8. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon

    2018-02-01

    Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated into the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised because they are the most relevant for the AET patterns simulated by the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with a flexible spatial model parameterisation used in combination with appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and a novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
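
    The metric itself is not spelled out in this record, so the following is an illustrative reimplementation of a three-component pattern score (co-location, variation, distribution) of the kind the abstract describes, not the authors' code:

        # Hedged sketch of a bias-insensitive spatial pattern metric; 1 means a
        # perfect pattern match, lower values mean poorer agreement.
        import numpy as np

        def spatial_pattern_metric(sim, obs, bins=100):
            sim, obs = sim.ravel(), obs.ravel()
            ok = ~(np.isnan(sim) | np.isnan(obs))
            sim, obs = sim[ok], obs[ok]
            # 1. co-location: Pearson correlation of the two fields
            alpha = np.corrcoef(sim, obs)[0, 1]
            # 2. variation: ratio of coefficients of variation
            beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
            # 3. distribution: histogram overlap of z-scored fields
            zs = (sim - sim.mean()) / sim.std()
            zo = (obs - obs.mean()) / obs.std()
            lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
            hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
            ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
            gamma = np.minimum(hs, ho).sum() / ho.sum()
            return 1.0 - np.sqrt((alpha - 1) ** 2
                                 + (beta - 1) ** 2
                                 + (gamma - 1) ** 2)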

  9. A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the Mid-Atlantic Region

    NASA Technical Reports Server (NTRS)

    Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.

    2016-01-01

    The spatial variability of the parameters of the raindrop size distribution and its derivatives is investigated through a field study in which collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. A three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting the function to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h-1 for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected minimum detectable signals of the Global Precipitation Measurement mission's spaceborne radars. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for rain gauges at different bucket sizes.
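
    A minimal sketch of fitting that correlation model, r(d) = r0*exp(-(d/d0)**s0) with the nugget r0 fixed at 0.99; the separation distances and correlations below are placeholders, not the campaign's data:

        # Hedged sketch: retrieving correlation distance d0 and shape s0.
        import numpy as np
        from scipy.optimize import curve_fit

        def corr_model(d, d0, s0, r0=0.99):
            return r0 * np.exp(-((d / d0) ** s0))

        d = np.array([0.2, 0.5, 0.9, 1.4, 1.9, 2.3])        # separations (km)
        r = np.array([0.97, 0.93, 0.88, 0.82, 0.76, 0.71])  # made-up correlations

        (d0, s0), _ = curve_fit(corr_model, d, r, p0=(4.0, 1.0))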

  10. Determination of the parton distributions and structure functions of the proton from neutrino and antineutrino reactions on hydrogen and deuterium

    NASA Astrophysics Data System (ADS)

    Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; Klein, H.; Morrison, D. R. O.; Wachsmuth, H.; Miller, D. B.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H. P.; Myatt, G.; Cooper-Sarkar, A. M.; Guy, J.; Venus, W.; Bullock, F. W.; Burke, S.

    1994-12-01

    This analysis is based on data from neutrino and antineutrino scattering on hydrogen and deuterium, obtained with BEBC in the (anti) neutrino wideband beam of the CERN SPS. The parton momentum distributions in the proton and the proton structure functions are determined in the range 0.01

  11. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Since the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  12. Extracting the pair distribution function of liquids and liquid-vapor surfaces by grazing incidence x-ray diffraction mode.

    PubMed

    Vaknin, David; Bu, Wei; Travesset, Alex

    2008-07-28

    We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angles of incidence (in reflection mode) by using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities onto a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extracting g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least-squares fit method. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previously measured and theoretical predictions of the PDF are also discussed.

  13. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  14. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem, given by a Fredholm integral of the first kind, to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral, we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: The method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities; furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
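
    A minimal sketch of the underlying computation: discretize the Fredholm operator and solve the Tikhonov-regularized normal equations (the kernel, data and regularization parameter below are toy stand-ins, not the paper's):

        # Hedged sketch: Tikhonov solution of a discretized Fredholm problem.
        import numpy as np

        def tikhonov_solve(A, b, lam):
            """Solve min ||A p - b||^2 + lam * ||p||^2 via normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        # toy smoothing kernel and noisy data to exercise the solver
        n = 100
        x = np.linspace(0.0, 1.0, n)
        A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005) / n
        p_true = np.exp(-((x - 0.5) ** 2) / 0.01)
        rng = np.random.default_rng(4)
        b = A @ p_true + rng.normal(0, 1e-3, n)

        p_est = tikhonov_solve(A, b, lam=1e-6)  # lam sets the regularization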

  15. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108

  16. Excitation functions of parameters extracted from three-source (net-)proton rapidity distributions in Au-Au and Pb-Pb collisions over an energy range from AGS to RHIC

    NASA Astrophysics Data System (ADS)

    Gao, Li-Na; Liu, Fu-Hu; Sun, Yan; Sun, Zhu; Lacey, Roy A.

    2017-03-01

    Experimental results for the rapidity spectra of protons and net-protons (protons minus antiprotons) emitted in gold-gold (Au-Au) and lead-lead (Pb-Pb) collisions, measured by several collaborations at the Alternating Gradient Synchrotron (AGS), Super Proton Synchrotron (SPS), and Relativistic Heavy Ion Collider (RHIC), are described by a three-source distribution. The values of the distribution width σC and fraction kC of the central rapidity region, and the distribution width σF and rapidity shift Δy of the forward/backward rapidity regions, are then obtained. The excitation function of σC generally increases with increasing center-of-mass energy per nucleon pair √s_NN. The excitation function of σF shows a saturation at √s_NN = 8.8 GeV. The excitation function of kC shows a minimum at √s_NN = 8.8 GeV and a saturation at √s_NN ≈ 17 GeV. The excitation function of Δy increases linearly with ln(√s_NN) in the considered energy range.

  17. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    USGS Publications Warehouse

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of the estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used: a geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.

  18. Reproducibility of structural strength and stiffness for graphite-epoxy aircraft spoilers

    NASA Technical Reports Server (NTRS)

    Howell, W. E.; Reese, C. D.

    1978-01-01

    Structural strength reproducibility of graphite-epoxy composite spoilers for the Boeing 737 aircraft was evaluated by statically loading fifteen spoilers to failure under conditions simulating aerodynamic loads. Spoiler strength and stiffness data were statistically modeled using a two-parameter Weibull distribution function. Shape parameter values calculated for the composite spoiler strength and stiffness were within the range of corresponding shape parameter values calculated for material-property data of composite laminates. This agreement showed that the reproducibility of full-scale component structural properties was within the reproducibility range of data from material-property tests.
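
    A hedged sketch of such a two-parameter Weibull fit using SciPy; the strength values are invented, and floc=0 pins the location at zero so that only shape and scale are estimated:

        # Hedged sketch: two-parameter Weibull fit to failure-strength data.
        import numpy as np
        from scipy.stats import weibull_min

        strength = np.array([14.1, 15.3, 13.8, 16.0, 14.9, 15.6, 14.4, 15.1,
                             13.9, 15.8, 14.7, 15.2, 14.0, 15.5, 14.6])  # made up

        shape, loc, scale = weibull_min.fit(strength, floc=0)
        # `shape` is the Weibull modulus: higher values mean less scatter, i.e.
        # better reproducibility between nominally identical parts.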

  19. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, with the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model that combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on single-objective likelihoods (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or the parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
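
    A small sketch of the KLD diagnostic between prior and posterior samples via histogram discretization, a crude but common approximation; the bin count and the sample arrays are assumptions:

        # Hedged sketch: KL(posterior || prior) from Monte Carlo samples.
        import numpy as np
        from scipy.stats import entropy

        def kld_from_samples(prior, posterior, bins=50):
            lo = min(prior.min(), posterior.min())
            hi = max(prior.max(), posterior.max())
            p, _ = np.histogram(posterior, bins=bins, range=(lo, hi), density=True)
            q, _ = np.histogram(prior, bins=bins, range=(lo, hi), density=True)
            eps = 1e-12                       # guard against empty bins
            return entropy(p + eps, q + eps)  # large KLD = informative data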

  20. Analytical performance evaluation of SAR ATR with inaccurate or estimated models

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.

    2004-09-01

    Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.

  1. Determination of the Nucleon-Nucleon Interaction in the ImQMD Model by Nuclear Reactions at Fermi Energy

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Tian, Jun-Long; Wang, Ning

    2013-11-01

    The nucleon-nucleon interaction is investigated using the ImQMD model with three parameter sets, IQ1, IQ2 and IQ3, whose corresponding incompressibility coefficients of nuclear matter differ. The fusion excitation function and the charge distribution of fragments are calculated for the reaction system 40Ca+40Ca at different incident energies. Obvious differences in the charge distribution are observed in the energy region 10-25A MeV for the three parameter sets, while the results are close to each other in the energy region 30-45A MeV. This indicates that the Fermi energy region is a sensitive energy region for exploring the N-N interaction. The fragment multiplicity spectrum for 238U+197Au at 15A MeV is reproduced by the ImQMD model with parameter set IQ3. It is concluded that the charge distribution of the fragments and the fragment multiplicity spectrum are good observables for studying the N-N interaction, and that IQ3 is a suitable parameter set for the ImQMD model.

  2. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of the earthquake parameters differ, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent-dimensions space rather than the original parameter space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free, non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
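
    A minimal sketch of the ED transformation using empirical CDFs (simple rank-based estimates; the paper's kernel estimator would smooth these), after which distances are plainly Euclidean:

        # Hedged sketch: transform catalogue parameters to equivalent dimensions.
        import numpy as np
        from scipy.stats import rankdata

        def to_equivalent_dimensions(X):
            """X: (n_events, n_params) catalogue; returns ECDF-transformed copy."""
            return np.column_stack([rankdata(col) / len(col) for col in X.T])

        def pairwise_distances(E):
            diff = E[:, None, :] - E[None, :, :]
            return np.sqrt((diff ** 2).sum(axis=-1))  # Euclidean in ED space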

  3. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.

  4. LIMEPY: Lowered Isothermal Model Explorer in PYthon

    NASA Astrophysics Data System (ADS)

    Gieles, Mark; Zocchi, Alice

    2017-10-01

    LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation for the given input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models, normalized DF values, density and velocity moments, and projected properties, and it generates discrete samples.

  5. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities

    PubMed Central

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690

  6. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities.

    PubMed

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard.

  7. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques to derive several statistical wind models are presented. The techniques derive from the properties of the multivariate normal probability function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
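
    As a quick check of property (2), the sketch below samples zero-mean isotropic bivariate normal wind components and fits a Rayleigh distribution to the resulting speeds; sigma is an invented value:

        # Hedged sketch: wind speed from isotropic normal components is Rayleigh.
        import numpy as np
        from scipy.stats import rayleigh

        rng = np.random.default_rng(2)
        sigma = 6.0  # m/s, common to both wind components (illustrative)
        u = rng.normal(0.0, sigma, 10000)
        v = rng.normal(0.0, sigma, 10000)
        speed = np.hypot(u, v)

        loc, scale = rayleigh.fit(speed, floc=0)  # scale should approach sigma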

  8. Collisional evolution - an analytical study for the nonsteady-state mass distribution

    NASA Astrophysics Data System (ADS)

    Martins, R. Vieira

    1999-05-01

    To study the collisional evolution of asteroidal groups we can use an analytical solution for self-similar collision cascades. This solution is suitable for studying the steady-state mass distribution of collisional fragmentation. However, outside steady-state conditions, this solution is not satisfactory for some values of the collisional parameters. In fact, for some values of the exponent of the mass distribution power law of an asteroidal group and its relation to the exponent of the function which describes how rocks break, we arrive at singular points of the equation which describes the collisional evolution. These singularities appear because some approximations are usually made in the laborious evaluation of the many integrals that appear in the analytical calculations; they concern the cutoffs for the smallest and largest bodies. These singularities set some restrictions on the study of the analytical solution of the collisional equation. To overcome them, we performed an algebraic computation considering the smallest and largest bodies, and we obtained analytical expressions for the integrals that describe the collisional evolution without restrictions on the parameters. However, the new distribution is more sensitive to the values of the collisional parameters. In particular, the steady-state solution for the differential mass distribution has exponents slightly different from 11/6 for the usual parameters in the Asteroid Belt. The sensitivity of this distribution with respect to the parameters is analyzed for the usual values in asteroidal groups. With an expression for the mass distribution free of singularities, we can also evaluate its time evolution. We arrive at an analytical expression given by a power series whose terms consist of a small parameter multiplied by the mass raised to an exponent that depends on the initial power-law distribution. This expression is a formal solution of the equation which describes the collisional evolution. Furthermore, the first-order term of this solution is the time rate of change of the distribution at the initial time. In particular, the solution shows the fundamental importance played by the exponent of the power-law initial condition in the evolution of the system.

  9. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period-normalized transit duration ratios among Kepler's multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies among these parameters, tied to the underlying number of planets in these systems, complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution and produces synthetic datasets via a physically motivated forward model. Samples are then accepted or rejected based on how closely they reproduce the actual observed dataset, to within some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC for exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
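
    In its simplest rejection form, ABC reduces to a few lines; the forward model, prior and summary statistic below are toy stand-ins for the transit simulation described above:

        # Hedged sketch: ABC rejection sampling with a toy normal forward model.
        import numpy as np

        rng = np.random.default_rng(3)
        obs = rng.normal(0.1, 1.0, size=500)  # stand-in "observed" dataset
        s_obs = obs.mean()                    # observed summary statistic

        accepted = []
        for _ in range(50000):
            theta = rng.uniform(-1.0, 1.0)    # draw from the prior
            s_sim = rng.normal(theta, 1.0, size=500).mean()  # forward model
            if abs(s_sim - s_obs) < 0.01:     # tolerance check
                accepted.append(theta)
        posterior = np.array(accepted)        # approximates p(theta | data)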

  10. Correlation functions in first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Garrido, V.; Crespo, D.

    1997-09-01

    Most of the physical properties of systems undergoing first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics are considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (the populational Kolmogorov-Johnson-Mehl-Avrami model). Because this model is less restrictive than those used in previously existing theories, the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.

  11. Application of Cox model in coagulation function in patients with primary liver cancer.

    PubMed

    Guo, Xuan; Chen, Mingwei; Ding, Li; Zhao, Shan; Wang, Yuefei; Kang, Qinjiong; Liu, Yi

    2011-01-01

    To analyze the distribution of coagulation parameters in patients with primary liver cancer; explore the relationship between clinical staging, survival, and coagulation parameters by using the Cox proportional hazard model; and provide a parameter for clinical management and prognosis. Coagulation parameters were evaluated in 228 patients with primary liver cancer, 52 patients with common liver disease, and 52 normal healthy controls. The relationship between primary liver cancer staging and coagulation parameters was analyzed. Follow-up examinations were performed. The Cox proportional hazard model was used to analyze the relationship between coagulation parameters and survival. The changes in the coagulation parameters in patients with primary liver cancer were significantly different from those in normal controls. The effect of the disease on coagulation function became more obvious as the severity of liver cancer increased (p<0.05). The levels of D-dimer, fibrinogen degradation products (FDP), fibrinogen (FIB), and platelets (PLT) were negatively correlated with the long-term survival of patients with advanced liver cancer. The stages of primary liver cancer are associated with coagulation parameters. Coagulation parameters are related to survival and risk factors. Monitoring of coagulation parameters may help ensure better surveillance and treatment for liver cancer patients.
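
    For readers wanting to reproduce this kind of analysis, the sketch below fits a Cox proportional hazards model to synthetic coagulation data. It assumes the third-party lifelines package is available; the covariates, effect sizes, and censoring scheme are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival-analysis package

rng = np.random.default_rng(2)
n = 228  # same size as the liver-cancer cohort; the data here are synthetic

# Hypothetical coagulation covariates; higher D-dimer shortens survival.
d_dimer = rng.lognormal(0.0, 0.5, n)
fib = rng.normal(3.0, 0.6, n)
hazard = np.exp(0.8 * d_dimer - 0.2 * fib)
time = rng.exponential(24.0 / hazard)          # follow-up time in months
event = (time < 36.0).astype(int)              # 1 = death observed
time = np.minimum(time, 36.0)                  # administrative censoring

df = pd.DataFrame({"months": time, "died": event,
                   "d_dimer": d_dimer, "fibrinogen": fib})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios for each coagulation parameter
```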

  12. The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew

    PubMed Central

    Der, Ricky; Plotkin, Joshua B.

    2014-01-01

    We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
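
    The Wright–Fisher model mentioned above is the simplest member of the Λ-process family, and its equilibrium allele-frequency behaviour can be simulated directly. The sketch below uses illustrative parameter values, not the paper's: it iterates mutation pressure followed by binomial resampling; a skewed-offspring Λ-process would replace the binomial step.

```python
import numpy as np

def wright_fisher_freqs(N=500, mu01=1e-3, mu10=1e-3,
                        generations=200_000, burn_in=10_000, seed=3):
    """Equilibrium allele-frequency samples for two neutral alleles under
    reversible mutation (the Wright-Fisher special case of a Lambda-process)."""
    rng = np.random.default_rng(seed)
    x = 0.5  # frequency of allele 1
    samples = []
    for t in range(generations):
        # mutation pressure, then binomial resampling of N offspring
        p = x * (1.0 - mu10) + (1.0 - x) * mu01
        x = rng.binomial(N, p) / N
        if t >= burn_in and t % 50 == 0:
            samples.append(x)
    return np.array(samples)

freqs = wright_fisher_freqs()
print(f"mean frequency: {freqs.mean():.3f}")  # ~0.5 for symmetric mutation
```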

  13. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly follows a single step with mechanism function f(α) = (1−α)³ and kinetic parameter pair E₀ = 180.5 kJ/mol and A₀ = 1.5×10¹³ s⁻¹. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the Logistic DAEM could not resolve the weight-loss features, since it represents the process as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight-fraction distribution in the discrete DAEM reflected the weight-loss features, so both models could describe them. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A reexamination of plasma measurements from the Mariner 5 Venus encounter

    NASA Technical Reports Server (NTRS)

    Shefer, R. E.; Lazarus, A. J.; Bridge, H. S.

    1979-01-01

    Mariner 5 plasma data from the Venus encounter have been analyzed with twice the time resolution of the original analysis of Bridge et al. (1967). The velocity distribution function for each spectrum is used to determine more precisely the locations of boundaries and characteristic flow parameters in the interaction region around the planet. A new region is identified in the flow located between magnetosheathlike plasma inside the shock front and an interior low-flux region near the geometrical shadow of the planet. The region is characterized by a wide velocity distribution function and a decrease in ion flux. Using the highest time resolution magnetic field data, it is proposed that rapid magnetic field fluctuations in this region may result in an artificial broadening of the distribution function. It is concluded that very high time resolution is required in future experiments in order to determine the true nature of the plasma in this region.

  15. Definition of a simple statistical parameter for the quantification of orientation in two dimensions: application to cells on grooves of nanometric depths.

    PubMed

    Davidson, P; Bigerelle, M; Bounichane, B; Giazzon, M; Anselme, K

    2010-07-01

    Contact guidance is generally evaluated by measuring the orientation angle of cells. However, statistical analyses are rarely performed on these parameters. Here we propose a statistical analysis based on a new parameter σ, the orientation parameter, defined as the dispersion of the distribution of orientation angles. This parameter can be used to obtain a truncated Gaussian distribution that models the distribution of the data between −90° and +90°. We established a threshold value of the orientation parameter below which the data can be considered to be aligned within a 95% confidence interval. Applying our orientation parameter to cells on grooves and using a modelling approach, we established the relationship σ = α_meas + (52° − α_meas)/(1 + C_GDE·R), where the parameter C_GDE represents the sensitivity of cells to groove depth, and R the groove depth. The values of C_GDE obtained allowed us to compare the contact guidance of human osteoprogenitor (HOP) cells across experiments involving different groove depths, times in culture and inoculation densities. We demonstrate that HOP cells are able to identify and respond to the presence of grooves 30, 100, 200 and 500 nm deep and that the deeper the grooves, the higher the cell orientation. The evolution of the sensitivity (C_GDE) with culture time is roughly sigmoidal with an asymptote, which is a function of inoculation density. The σ parameter defined here is a universal parameter that can be applied to all orientation measurements and does not require a mathematical background or knowledge of directional statistics. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  16. Space shuttle solid rocket booster recovery system definition. Volume 2: SRB water impact Monte Carlo computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
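
    The kind of Monte Carlo perturbation the program performs can be illustrated in a few lines. The sketch below is not the HD 220 model: the load and strength distributions and their coefficients are invented stand-ins showing how simultaneous perturbation of independent inputs yields output probability distributions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials = 100_000

# Hypothetical environmental and strength distributions (illustrative only):
wave_height = rng.weibull(2.0, n_trials) * 2.5          # metres
wind_speed = rng.normal(9.0, 3.0, n_trials).clip(0.0)   # m/s
impact_load = 50.0 + 12.0 * wave_height + 1.5 * wind_speed  # toy load model
strength = rng.normal(120.0, 15.0, n_trials)            # component strength

failure = impact_load > strength
print(f"estimated failure probability: {failure.mean():.4f}")
# Percentiles summarize the output PDF of a dependent parameter:
print("load percentiles (5/50/95):",
      np.percentile(impact_load, [5, 50, 95]).round(1))
```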

  17. Attaining insight into interactions between hydrologic model parameters and geophysical attributes for national-scale model parameter estimation

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.

    2017-12-01

    Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters and for nonlinear scaling effects on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them, using scaling functions, to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or are derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content contained in the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis, with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.

  18. A temperature-dependent coarse-grained model for the thermoresponsive polymer poly(N-isopropylacrylamide).

    PubMed

    Abbott, Lauren J; Stevens, Mark J

    2015-12-28

    A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.

  19. Comparison of information theoretic divergences for sensor management

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael

    2011-06-01

    In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they afford comparisons between distributions resulting from different types of sensors under different actions. The difference in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e., Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and design parameter.
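
    Both divergences are straightforward to compute for discrete distributions. The sketch below implements the Kullback-Leibler divergence and the Rényi α-divergence, D_α(P‖Q) = (α − 1)⁻¹ log Σ pᵢ^α qᵢ^(1−α), which recovers K-L as α → 1; the example distributions are illustrative, not from the paper.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions (natural log)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def renyi_divergence(p, q, alpha):
    """D_alpha(p || q) = log(sum p**a * q**(1-a)) / (a - 1); a -> 1 gives KL."""
    if np.isclose(alpha, 1.0):
        return kl_divergence(p, q)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = [0.7, 0.2, 0.1]   # e.g., track posterior under sensor action A
q = [0.4, 0.4, 0.2]   # e.g., track posterior under sensor action B
for a in (0.5, 0.99, 2.0):
    print(f"alpha={a}: D={renyi_divergence(p, q, a):.4f}")
```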

  20. Light scattering by lunar-like particle size distributions

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1991-01-01

    A fundamental input to models of light scattering from planetary regoliths is the mean phase function of the regolith particles. Using the known size distribution for typical lunar soils, the mean phase function and mean linear polarization for a regolith volume element of spherical particles of any composition were calculated from Mie theory. The two contour plots given here summarize the changes in the mean phase function and linear polarization with changes in the real part of the complex index of refraction, n - ik, for k equals 0.01, the visible wavelength 0.55 micrometers, and the particle size distribution of the typical mature lunar soil 72141. A second figure is a similar index-phase surface, except with k equals 0.1. The index-phase surfaces from this survey are a first order description of scattering by lunar-like regoliths of spherical particles of arbitrary composition. They form the basis of functions that span a large range of parameter-space.

  1. Two-component Jaffe models with a central black hole - I. The spherical case

    NASA Astrophysics Data System (ADS)

    Ciotti, Luca; Ziaee Lorzad, Azadeh

    2018-02-01

    Dynamical properties of spherically symmetric galaxy models in which both the stellar and total mass density distributions are described by the Jaffe (1983) profile (with different scalelengths and masses) are presented. The orbital structure of the stellar component is described by Osipkov-Merritt anisotropy, and a black hole (BH) is added at the centre of the galaxy; the dark matter halo is isotropic. First, the conditions required to have a nowhere negative and monotonically decreasing dark matter halo density profile are derived. We then show that the phase-space distribution function can be recovered by using the Lambert-Euler W function, while in the absence of the central BH only elementary functions appear in the integrand of the inversion formula. The minimum value of the anisotropy radius for consistency is derived in terms of the galaxy parameters. The Jeans equations for the stellar component are solved analytically, and the projected velocity dispersion at the centre and at large radii is also obtained analytically for generic values of the anisotropy radius. Finally, the relevant global quantities entering the Virial Theorem are computed analytically, and the fiducial anisotropy limit required to prevent the onset of Radial Orbit Instability is determined as a function of the galaxy parameters. The presented models, even though highly idealized, represent a substantial generalization of the models presented in Ciotti, and can be useful as a starting point for more advanced modelling of the dynamics and mass distribution of elliptical galaxies.

  2. Shapiro effect as a possible cause of the low-frequency pulsar timing noise in globular clusters

    NASA Astrophysics Data System (ADS)

    Larchenkova, T. I.; Kopeikin, S. M.

    2006-01-01

    A prolonged timing of millisecond pulsars has revealed low-frequency uncorrelated (infrared) noise, presumably of astrophysical origin, in the pulse arrival time (PAT) residuals for some of them. Currently available pulsar timing methods allow the statistical parameters of this noise to be reliably measured by decomposing the PAT residual function into orthogonal Fourier harmonics. In most cases, pulsars in globular clusters show a low-frequency modulation of their rotational phase and spin rate. The relativistic time delay of the pulsar signal in the curved spacetime of randomly distributed and moving globular cluster stars (the Shapiro effect) is suggested as a possible cause of this modulation. Astrophysically important information about the structure of the globular cluster core, which is inaccessible to study by other observational methods, could be obtained by analyzing the spectral parameters of the low-frequency noise caused by the Shapiro effect and attributable to the random passages of stars near the line of sight to the pulsar. Given the smallness of the aberration corrections that arise from the nonstationarity of the gravitational field of the randomly distributed ensemble of stars under consideration, a formula is derived for the Shapiro effect for a pulsar in a globular cluster. The derived formula is used to calculate the autocorrelation function of the low-frequency pulsar noise, the slope of its power spectrum, and the behavior of the σz statistic that characterizes the spectral properties of this noise as a function of time. The Shapiro effect under discussion is shown to manifest itself for large impact parameters as a low-frequency noise of the pulsar spin rate with a spectral index of n = -1.8 that depends weakly on the specific model distribution of stars in the globular cluster. For small impact parameters, the spectral index of the noise is n = -1.5.

  3. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modelling, inversion modelling offers more room for study. Zonation and cell-by-cell inversion are the conventional methods; pilot points lie between them. The traditional inverse modelling approach often uses software to divide the model into several zones, leaving only a few parameters to be inverted; however, such a distribution is usually too simple, and the simulation deviates as a result. Cell-by-cell inversion yields, in theory, the most realistic parameter distribution, but it demands great computational effort and a large quantity of survey data for geostatistical characterization of the simulated area. Compared to these methods, the pilot-point approach distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by kriging, which preserves the heterogeneity of the parameters within geological units. This reduces the geostatistical data requirements of the simulated area and bridges the gap between the two methods above. Pilot points not only save computation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field whose structural heterogeneity and hydraulic parameters are unknown, and we compare the inversion results of the zonation and pilot-point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modelling. First, the modeller generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Kriging is defined to obtain the values of the field functions (hydraulic conductivity) over the model domain on the basis of their values at measurement and pilot-point locations; we then assign pilot points to the interpolated field, which has been divided into 4 zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modelling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: the parameters fit better, and the numerical simulation is more stable (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, guaranteeing the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
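
    The core mechanics of the pilot-point method, interpolating cell values from a handful of adjustable points, can be sketched with a Gaussian process as a stand-in for kriging. The locations, conductivity values, and kernel length scale below are invented, and scikit-learn is assumed to be available; a real workflow would let PEST perturb the pilot-point values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

# Hypothetical pilot points: (x, y) locations and log10 hydraulic conductivity.
pilot_xy = rng.uniform(0.0, 1000.0, size=(12, 2))
pilot_logk = rng.normal(-4.0, 0.5, size=12)   # values an inversion would adjust

# Ordinary-kriging stand-in: a GP with an RBF, variogram-like kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=250.0), normalize_y=True)
gp.fit(pilot_xy, pilot_logk)

# Interpolate onto the model grid so every cell gets a conductivity value.
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid_logk = gp.predict(np.column_stack([gx.ravel(), gy.ravel()]))
print(f"grid log10(K) range: {grid_logk.min():.2f} to {grid_logk.max():.2f}")
```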

  4. Coupling-parameter expansion in thermodynamic perturbation theory.

    PubMed

    Ramana, A Sai Venkata; Menon, S V G

    2013-02-01

    An approach to the coupling-parameter expansion in the liquid state theory of simple fluids is presented by combining the ideas of thermodynamic perturbation theory and integral equation theories. This hybrid scheme avoids the problems of the latter in the two-phase region. A method to compute the perturbation series to any arbitrary order is developed and applied to square-well fluids. Apart from the Helmholtz free energy, the method also gives the radial distribution function and the direct correlation function of the perturbed system. The theory is applied to square-well fluids of variable ranges and compared with simulation data. While the convergence of the perturbation series and the overall performance of the theory are good, improvements are needed for potentials with shorter ranges. Possible directions for further developments in the coupling-parameter expansion are indicated.

  5. Development and application of a probability distribution retrieval scheme to the remote sensing of clouds and precipitation

    NASA Astrophysics Data System (ADS)

    McKague, Darren Shawn

    2001-12-01

    The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel that can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive, and typically don't provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters that allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two- dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter radiance space transformation. The impact of vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary quite a bit depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)

  6. Linear functional minimization for inverse modeling

    DOE PAGES

    Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...

    2015-06-01

    In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate the hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, the addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.

  7. Methods for using a biometric parameter in the identification of persons

    DOEpatents

    Hively, Lee M [Philadelphia, TN]

    2011-11-22

    Brain waves are used as a biometric parameter to provide for authentication and identification of personnel. The brain waves are sampled using EEG equipment and are processed using phase-space distribution functions to compare digital signature data from enrollment of authorized individuals to data taken from a test subject to determine if the data from the test subject matches the signature data to a degree to support positive identification.

  8. Photoionization of rare gas clusters

    NASA Astrophysics Data System (ADS)

    Zhang, Huaizhen

    This thesis concentrates on the study of photoionization of van der Waals clusters with different cluster sizes. The goal of the experimental investigation is to understand the electronic structure of van der Waals clusters and the electronic dynamics. These studies are fundamental to understanding the interaction between UV-X rays and clusters. The experiments were performed at the Advanced Light Source at Lawrence Berkeley National Laboratory. The experimental method employs angle-resolved time-of-flight photoelectron spectrometry, one of the most powerful methods for probing the electronic structure of atoms, molecules, clusters and solids. The van der Waals cluster photoionization studies are focused on probing the evolution of the photoelectron angular distribution parameter as a function of photon energy and cluster size. The angular distribution has been known to be a sensitive probe of the electronic structure in atoms and molecules. However, it has not been used in the case of van der Waals clusters. We carried out outer-valence-level, inner-valence-level and core-level cluster photoionization experiments. Specifically, this work reports on the first quantitative measurements of the angular distribution parameters of rare gas clusters as a function of average cluster size. Our finding for xenon clusters is that the overall photon-energy-dependent behavior of the photoelectrons from the clusters is very similar to that of the corresponding free atoms. However, distinct differences in the angular distribution were found, pointing to cluster-size-dependent effects. For krypton clusters, in the photon energy range where atomic photoelectrons have a high angular anisotropy, our measurements show considerably more isotropic angular distributions for the cluster photoelectrons, especially right above the 3d and 4p thresholds. For the valence electrons, a surprising difference between the two spin-orbit components was found. For argon clusters, we found that the angular distribution parameter values of the two spin-orbit components from Ar 2p clusters are slightly different. When comparing the beta values for Ar between atoms and clusters, we found different results between Ar 3s atoms and clusters, and between Ar 3p atoms and clusters. Argon cluster resonances from the surface and the bulk were also measured. Furthermore, the angular distribution parameters of Ar cluster photoelectrons and Ar atom photoelectrons in the 3s → np ionization region were obtained.

  9. UK audit of analysis of quantitative parameters from renography data generated using a physical phantom.

    PubMed

    Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V

    2014-07-01

    In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean square deviations from the true value were 7.7% for the whole-kidney time-to-peak measurement and 4.5% for the relative function measurement. These results showed a consistency in the relative function and time-to-peak similar to that reported in a previous renogram audit by our group. Analysis of the audit data suggests a reasonable degree of accuracy in the quantification of renographic function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.

  10. Double stratified radiative Jeffery magneto nanofluid flow along an inclined stretched cylinder with chemical reaction and slip condition

    NASA Astrophysics Data System (ADS)

    Ramzan, M.; Gul, Hina; Dong Chung, Jae

    2017-11-01

    A mathematical model is designed to analyze the flow of an MHD Jeffery nanofluid past a vertically inclined stretched cylinder near a stagnation point. The flow analysis is performed in the presence of thermal radiation, mixed convection and chemical reaction. The influence of thermal and solutal stratification with a slip boundary condition is also considered. Apposite transformations are engaged to convert the nonlinear partial differential equations into highly nonlinear ordinary differential equations. Convergent series solutions of the problem are established via the renowned Homotopy Analysis Method (HAM). Graphical illustrations are plotted to depict the effects of the prominent parameters on all involved distributions. Numerical tables of important physical parameters such as the skin friction coefficient and the Nusselt and Sherwood numbers are also given. Comparative studies (with a previously examined work) are included to endorse our results. It is noticed that the thermal stratification parameter has a diminishing effect on the temperature distribution. Moreover, the velocity field is an increasing function of the curvature parameter and a decreasing function of the slip parameter.

  11. Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas

    NASA Technical Reports Server (NTRS)

    Baty, Roy S.; Farassat, Fereidoun; Hargreaves, John

    2007-01-01

    Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservation laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.

  12. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
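
    A convenient property of the tapered Pareto distribution with survival function S(x) = (x0/x)^β exp((x0 − x)/xc) is that it arises as the minimum of an ordinary Pareto variate and a shifted exponential variate, which gives a simple sampler. The sketch below uses that identity; the parameter values are illustrative, not those fitted to the tide gauge stations.

```python
import numpy as np

def sample_tapered_pareto(n, beta, x0, xc, seed=6):
    """Tapered Pareto: survival S(x) = (x0/x)**beta * exp((x0 - x)/xc).
    The minimum of independent Pareto(beta, x0) and x0-shifted exponential
    (scale xc) variates has exactly this survival function, since survival
    functions multiply under the minimum."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    pareto = x0 * (1.0 - u) ** (-1.0 / beta)   # inverse-CDF Pareto draw
    expo = x0 + rng.exponential(xc, n)         # taper (corner) component
    return np.minimum(pareto, expo)

amps = sample_tapered_pareto(50_000, beta=1.0, x0=0.01, xc=0.5)
print(f"median amplitude: {np.median(amps):.3f} m")
print(f"99.9th percentile: {np.percentile(amps, 99.9):.3f} m")
```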

  13. 78 FR 27365 - Gulf of Mexico Fishery Management Council; Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-10

    ..., Special Mackerel and Ecosystem Scientific and Statistical Committees (SSC). DATES: The meeting will... certain parameters necessary to produce the probability distribution functions (PDFs) needed to determine... Scientific and Statistical Committees for discussion, in accordance with the Magnuson-Stevens Fishery...

  14. Existence of Optimal Controls for Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Doboszczak, Stefan; Mohan, Manil T.; Sritharan, Sivaguru S.

    2018-03-01

    We formulate a control problem for a distributed parameter system where the state is governed by the compressible Navier-Stokes equations. Introducing a suitable cost functional, the existence of an optimal control is established within the framework of strong solutions in three dimensions.

  15. On q-non-extensive statistics with non-Tsallisian entropy

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Korbel, Jan

    2016-02-01

    We combine an axiomatics of Rényi with the q-deformed version of Khinchin axioms to obtain a measure of information (i.e., entropy) which accounts both for systems with embedded self-similarity and non-extensivity. We show that the entropy thus obtained is uniquely solved in terms of a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding "high" and "low-temperature" asymptotics and reveal a non-trivial structure of the parameter space. Salient issues such as concavity and Schur concavity of the new entropy are also discussed.

  16. Elliptic flow in small systems due to elliptic gluon distributions?

    DOE PAGES

    Hagiwara, Yoshikazu; Hatta, Yoshitaka; Xiao, Bo-Wen; ...

    2017-05-31

    We investigate the contributions from the so-called elliptic gluon Wigner distributions to the rapidity and azimuthal correlations of particles produced in high energy pp and pA collisions by applying the double parton scattering mechanism. We compute the ‘elliptic flow’ parameter v2 as a function of the transverse momentum and rapidity, and find qualitative agreement with experimental observations. This shall encourage further developments with more rigorous studies of the elliptic gluon distributions and their applications in hard scattering processes in pp and pA collisions.

  17. Elliptic flow in small systems due to elliptic gluon distributions?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagiwara, Yoshikazu; Hatta, Yoshitaka; Xiao, Bo-Wen

    We investigate the contributions from the so-called elliptic gluon Wigner distributions to the rapidity and azimuthal correlations of particles produced in high energy pp and pA collisions by applying the double parton scattering mechanism. We compute the ‘elliptic flow’ parameter v2 as a function of the transverse momentum and rapidity, and find qualitative agreement with experimental observations. This shall encourage further developments with more rigorous studies of the elliptic gluon distributions and their applications in hard scattering processes in pp and pA collisions.

  18. On the determination of the depth of EAS development maximum using the lateral distribution of Cerenkov light at distances 150 m from EAS axis

    NASA Technical Reports Server (NTRS)

    Aliev, N.; Alimov, T.; Kakhkharov, M.; Makhmudov, B. M.; Rakhimova, N.; Tashpulatov, R.; Kalmykov, N. N.; Khristiansen, G. B.; Prosin, V. V.

    1985-01-01

    The Samarkand extensive air shower (EAS) array was used to measure the mean and individual lateral distribution functions (LDF) of EAS Cerenkov light. The analysis of the individual parameters b showed that the mean depth of EAS maximum and the variance of the depth distribution of maxima of EAS with energies of approximately 2×10^15 eV can be properly described in terms of the Kaidalov-Martirosyan quark-gluon string model (QGSM).

  19. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
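
    A discretized KL expansion is easy to set up: eigendecompose the covariance matrix of the field on a grid, truncate, and treat the series coefficients as the parameters to be updated. The sketch below does this for a 1-D field with a squared-exponential covariance; the grid, correlation length, and variance are illustrative, not the paper's spectral-element beam model.

```python
import numpy as np

# Discretized Karhunen-Loeve expansion of a 1-D random field (e.g., bending
# rigidity along a beam) with a squared-exponential covariance.
n, corr_len, sigma = 200, 0.2, 0.1
x = np.linspace(0.0, 1.0, n)
cov = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * corr_len**2))

eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
order = np.argsort(eigvals)[::-1]               # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99)) + 1
print(f"{m} KL terms capture 99% of the field variance")

# One realization: mean field plus truncated KL series with standard normal
# coefficients (these coefficients are the discretized parameters to update).
rng = np.random.default_rng(7)
xi = rng.standard_normal(m)
field = 1.0 + eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
print(f"sample field std: {field.std():.3f} (target ~{sigma})")
```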

  20. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing homeomorphism theory, M-matrix theory and an elementary inequality (stated for a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  1. Effects of dust polarity and nonextensive electrons on the dust-ion acoustic solitons and double layers in earth atmosphere

    NASA Astrophysics Data System (ADS)

    Ghobakhloo, Marzieh; Zomorrodian, Mohammad Ebrahim; Javidan, Kurosh

    2018-05-01

    Propagation of dust-ion acoustic solitary waves (DIASWs) and double layers in the Earth's atmosphere is discussed using the Sagdeev potential method. The best model for the distribution function of electrons in the Earth's atmosphere is found by fitting available data with different distribution functions. The nonextensive function with parameter q = 0.58 provides the best fit to the observations. We therefore analyze the propagation of localized waves in an unmagnetized plasma containing nonextensive electrons, inertial ions, and negatively/positively charged stationary dust. It is found that both compressive and rarefactive solitons as well as double layers exist depending on the sign (and the value) of the dust polarity. The characteristics of the propagating waves are described using the presented model.

  2. Banding of NMR-derived Methyl Order Parameters: Implications for Protein Dynamics

    PubMed Central

    Sharp, Kim A.; Kasinath, Vignesh; Wand, A. Joshua

    2014-01-01

    Our understanding of protein folding, stability and function has begun to more explicitly incorporate dynamical aspects. Nuclear magnetic resonance has emerged as a powerful experimental method for obtaining comprehensive site-resolved insight into protein motion. It has been observed that methyl-group motion tends to cluster into three “classes” when expressed in terms of the popular Lipari-Szabo model-free squared generalized order parameter. Here the origins of the three classes or bands in the distribution of order parameters are examined. As a first step, a Bayesian-based approach, which makes no a priori assumption about the existence or number of bands, is developed to detect the banding of O²axis values derived either from NMR experiments or molecular dynamics simulations. The analysis is applied to seven proteins, with extensive molecular dynamics simulations of these proteins in explicit water used to examine the relationship between O²axis and fine details of the motion of methyl-bearing side chains. All of the proteins studied display banding, with some subtle differences. We propose a very simple yet plausible physical mechanism for banding. Finally, our Bayesian method is used to analyze the measured distributions of methyl group motions in the catabolite activating protein and several of its mutants in various liganded states, and to discuss the functional implications of the observed banding to protein dynamics and function. PMID:24677353

  3. Parasitism alters three power laws of scaling in a metazoan community: Taylor’s law, density-mass allometry, and variance-mass allometry

    PubMed Central

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E.

    2015-01-01

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor’s law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution. PMID:25550506

  4. Parasitism alters three power laws of scaling in a metazoan community: Taylor's law, density-mass allometry, and variance-mass allometry.

    PubMed

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E

    2015-02-10

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor's law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution.
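
    Fitting Taylor's law reduces to a linear regression of log variance on log mean. The sketch below generates synthetic species-level data with a known exponent and recovers it; the sample sizes and noise level are invented for illustration and have no connection to the Otago survey data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic survey: spatial mean and variance of density for 30 species.
# Taylor's law: variance = a * mean**b, so log-log regression recovers b.
true_a, true_b = 2.0, 1.8
mean_density = rng.lognormal(1.0, 1.2, 30)
variance = true_a * mean_density**true_b * rng.lognormal(0.0, 0.2, 30)

b_hat, log_a_hat = np.polyfit(np.log(mean_density), np.log(variance), 1)
print(f"estimated exponent b = {b_hat:.2f} (true {true_b})")
print(f"estimated coefficient a = {np.exp(log_a_hat):.2f} (true {true_a})")
```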

  5. Density-functional theory based on the electron distribution on the energy coordinate

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideaki

    2018-03-01

    We developed an electronic density functional theory utilizing a novel electron distribution n(ɛ) as a basic variable to compute ground state energy of a system. n(ɛ) is obtained by projecting the electron density n({\\boldsymbol{r}}) defined on the space coordinate {\\boldsymbol{r}} onto the energy coordinate ɛ specified with the external potential {\\upsilon }ext}({\\boldsymbol{r}}) of interest. It was demonstrated that the Kohn-Sham equation can also be formulated with the exchange-correlation functional E xc[n(ɛ)] that employs the density n(ɛ) as an argument. It turned out an exchange functional proposed in our preliminary development suffices to describe properly the potential energies of several types of chemical bonds with comparable accuracies to the corresponding functional based on local density approximation. As a remarkable feature of the distribution n(ɛ) it inherently involves the spatially non-local information of the exchange hole at the bond dissociation limit in contrast to conventional approximate functionals. By taking advantage of this property we also developed a prototype of the static correlation functional E sc including no empirical parameters, which showed marked improvements in describing the dissociations of covalent bonds in {{{H}}}2,{{{C}}}2{{{H}}}4 and {CH}}4 molecules.

  6. Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence

    NASA Astrophysics Data System (ADS)

    Leubner, M. P.; Vörös, Z.; Baumjohann, W.

    Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is, in particular, apparent in the interplanetary medium from in situ observations. In this situation the classical Boltzmann–Gibbs extensive thermo-statistics, applicable when microscopic interactions and memory are short-ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional reproduces accurately the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multiscale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values, in slow solar wind conditions, where a Gaussian is approached in the limit of large scales. In contrast, the scaling properties in the high-speed solar wind are predominantly governed by the mean energy or variance of the distribution, which appears as the second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale also arises for the best-fitting theoretical PDFs. Consequently, nonlocality in fluctuations, related both to turbulence and to its large-scale driving, should be linked to long-range interactions in the context of nonextensive entropy generalization, which fundamentally provides the physical background for the observed scale dependence of fluctuations in intermittent space plasmas.
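
    The heavy tails that distinguish the κ-family from a Gaussian are easy to see numerically. The sketch below evaluates a 1-D κ-distribution, normalized numerically so that the Γ-function prefactor conventions do not matter, far out in the tail for several κ values; it illustrates the general shape of the family, not the paper's bi-κ functional.

```python
import numpy as np

def kappa_pdf(v, kappa, theta=1.0):
    """1-D kappa (power-law tailed) PDF, normalized by a Riemann sum so
    the choice of Gamma-function prefactor convention is irrelevant."""
    shape = lambda u: (1.0 + u**2 / (kappa * theta**2)) ** (-kappa)
    grid = np.linspace(-100.0 * theta, 100.0 * theta, 400_001)
    norm = shape(grid).sum() * (grid[1] - grid[0])
    return shape(np.asarray(v, float)) / norm

# Compare the far tail (v = 5*theta) against a unit-variance Gaussian.
gaussian_tail = np.exp(-5.0**2 / 2.0) / np.sqrt(2.0 * np.pi)
print(f"Gaussian f(5): {gaussian_tail:.2e}")
for k in (2.0, 5.0, 30.0):  # smaller kappa = stronger coupling, heavier tails
    print(f"kappa={k:>4}: f(5) = {float(kappa_pdf(5.0, k)):.2e}")
```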

  7. Leader-follower formation control of underactuated surface vehicles based on sliding mode control and parameter estimation.

    PubMed

    Sun, Zhijian; Zhang, Guoqing; Lu, Yu; Zhang, Weidong

    2018-01-01

    This paper studies the leader-follower formation control of underactuated surface vehicles with model uncertainties and environmental disturbances. A sliding mode control scheme based on parameter estimation and upper-bound estimation is proposed to handle the unknown plant parameters and environmental disturbances. For each leader-follower formation system, the dynamic equations of position and attitude are analyzed using coordinate transformation with the aid of the backstepping technique. All the variables are guaranteed to be uniformly ultimately bounded in the closed-loop system, which is proven via Lyapunov function synthesis. The main advantages of this approach are that: first, parameter-estimation-based sliding mode control enhances the robustness of the closed-loop system in the presence of model uncertainties and environmental disturbances; second, a continuous function is developed to replace the signum function in the sliding mode design, which serves to reduce chattering in the control system. Finally, numerical simulations are given to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Biological data assimilation for parameter estimation of a phytoplankton functional type model for the western North Pacific

    NASA Astrophysics Data System (ADS)

    Hoshiba, Yasuhiro; Hirata, Takafumi; Shigemitsu, Masahito; Nakano, Hideyuki; Hashioka, Taketo; Masuda, Yoshio; Yamanaka, Yasuhiro

    2018-06-01

    Ecosystem models are used to understand ecosystem dynamics and ocean biogeochemical cycles and require optimal physiological parameters to best represent biological behaviours. These physiological parameters are often tuned empirically, even as ecosystem models have evolved to include ever more of them. We developed a three-dimensional (3-D) lower-trophic-level marine ecosystem model known as the Nitrogen, Silicon and Iron regulated Marine Ecosystem Model (NSI-MEM) and employed biological data assimilation using a micro-genetic algorithm to estimate 23 physiological parameters for two phytoplankton functional types in the western North Pacific. The parameter estimation was based on a one-dimensional simulation that referenced satellite data to constrain the physiological parameters. The 3-D NSI-MEM optimized by the data assimilation improved the timing of modelled plankton blooms in the subarctic and subtropical regions compared to the model without data assimilation. Furthermore, the model improved not only the surface concentrations of phytoplankton but also their subsurface maximum concentrations. Our results showed that assimilating surface data to estimate physiological parameters at two contrasting observatory stations benefits the representation of the vertical plankton distribution in the western North Pacific.
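
    A micro-genetic algorithm keeps a tiny population, relies on elitism and crossover rather than mutation, and restarts around the elite when the population converges. The sketch below shows the scheme on a toy misfit function; in the real application, misfit() would run the 1-D ecosystem model and compare it against satellite-derived fields, and all settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def misfit(params):
    """Stand-in cost: distance of simulated to 'observed' summaries.
    A toy quadratic replaces the ecosystem-model run used in the paper."""
    target = np.array([0.6, 1.4, 0.9])
    return float(np.sum((params - target) ** 2))

def micro_ga(n_params=3, pop_size=5, n_generations=400, bounds=(0.0, 2.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_params))
    best = None
    for _ in range(n_generations):
        fit = np.array([misfit(p) for p in pop])
        elite = pop[fit.argmin()].copy()
        if best is None or misfit(elite) < misfit(best):
            best = elite
        # restart around the elite once the tiny population has converged
        if np.ptp(pop, axis=0).max() < 1e-3:
            pop = rng.uniform(lo, hi, size=(pop_size, n_params))
            pop[0] = elite
            continue
        # elitism + tournament selection + uniform crossover; no mutation
        children = [elite]
        while len(children) < pop_size:
            i, j = rng.choice(pop_size, 2, replace=False)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.choice(pop_size, 2, replace=False)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            mask = rng.random(n_params) < 0.5
            children.append(np.where(mask, a, b))
        pop = np.array(children)
    return best

print("estimated parameters:", micro_ga().round(3))
```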

  9. Hydromagnetic Waves and Instabilities in Kappa Distribution Plasma

    DTIC Science & Technology

    2009-01-01

    perpendicular effective particle temperatures, respectively. Two other parameters related to pM and pnl which naturally occur in the study of...role in determining the excitation conditions of the field swelling and mirror instabilities [see Eqs. (60) and (65)]. Calculating pnl/pni from Eq...more convenient form of the perturbed distribution function f_n that may be used instead of Eq. (12) to obtain nn, pM, and pnl given by Eqs. (72

  10. Electron-hole pair generation rate estimation in silicon irradiated by the isotope Nickel-63 using GEANT4

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.

    2015-10-01

    To optimize the parameters of a beta-electrical converter driven by isotope Nickel-63 radiation, a model of the distribution of the electron-hole pair (EHP) generation rate in the semiconductor must be derived. Using Monte Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gaussian function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation were estimated.

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
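
    Since the semi-nonparametric model nests the standard MNL/MDCEV, the test reduces to a classic likelihood ratio comparison. A minimal sketch, assuming hypothetical fitted log-likelihood values and that the generalization adds extra_params free coefficients:

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_general, extra_params):
    """Restricted model (standard Gumbel errors) nested in the
    semi-nonparametric generalization: 2*(LL_g - LL_r) ~ chi2(extra_params)."""
    stat = 2.0 * (ll_general - ll_restricted)
    return stat, chi2.sf(stat, df=extra_params)

# Hypothetical fitted log-likelihoods from the two models:
stat, p = likelihood_ratio_test(-1204.3, -1188.7, extra_params=4)
print(f"LR stat = {stat:.1f}, p = {p:.3g}")  # small p rejects the Gumbel assumption
```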

  12. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  13. Estimation of soil saturated hydraulic conductivity by artificial neural networks ensemble in smectitic soils

    NASA Astrophysics Data System (ADS)

    Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.

    2016-03-01

    The saturated hydraulic conductivity (K_s) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of K_s using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. PTFs were developed by an artificial neural network (ANN) ensemble to estimate K_s from available soil data and the fractal parameters. Significant correlations were found between K_s and the fractal parameters of particles and micro-aggregates. The estimation of K_s was improved significantly by using fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve the estimates significantly. Using the fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of K_s. Overall, fractal parameters can be successfully used as input parameters to improve the estimation of K_s in PTFs for smectitic soils, and the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to K_s.

  14. Realistic sampling of anisotropic correlogram parameters for conditional simulation of daily rainfields

    NASA Astrophysics Data System (ADS)

    Gyasi-Agyei, Yeboah

    2018-01-01

    This paper establishes a link between the spatial structure of radar rainfall, which describes the spatial structure more robustly, and gauge rainfall, for improved daily rainfield simulation conditioned on the limited gauged data in regions with or without radar records. A two-dimensional anisotropic exponential function, with parameters for the major and minor axis lengths and a direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and a Gumbel copula. While the gauge-derived, radar-derived and copula-derived correlogram parameters reproduced the mean estimates similarly under leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded a higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distributions of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases with higher SD from the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and calibration periods, respectively. It was observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction being dependent on the distribution of rainfall on the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on the SD through kriging.
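
    A common way to write such a two-dimensional anisotropic exponential correlogram (the paper's exact parameterization may differ) is to rotate the lag vector into the major/minor-axis frame and scale each component by its axis length:

```python
import numpy as np

def aniso_exp_correlogram(hx, hy, a_major, a_minor, theta):
    """2-D anisotropic exponential correlogram: rotate the lag (hx, hy) by theta
    into the major/minor-axis frame, scale by the axis lengths, then exp decay."""
    c, s = np.cos(theta), np.sin(theta)
    u = c * hx + s * hy        # lag component along the major axis
    v = -s * hx + c * hy       # lag component along the minor axis
    return np.exp(-np.sqrt((u / a_major)**2 + (v / a_minor)**2))

print(aniso_exp_correlogram(10.0, 0.0, a_major=50.0, a_minor=20.0, theta=np.pi / 6))
```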

  15. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases the required integrals must be solved numerically, but efficient methods are available, and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population.
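
    The core numerical step, integrating the inverse link function over the latent Gaussian distribution, can be done with Gauss-Hermite quadrature. A minimal Python sketch (the paper's implementation is the R package QGglmm; the function name here is an illustrative assumption), checked against the closed form for the log link:

```python
import numpy as np

def data_scale_mean(mu, sigma, inv_link, n=64):
    """E[inv_link(l)] for latent l ~ N(mu, sigma^2), by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)  # probabilists' Hermite
    return np.sum(weights * inv_link(mu + sigma * nodes)) / np.sqrt(2 * np.pi)

mu, sigma = 0.5, 0.8
print(data_scale_mean(mu, sigma, np.exp))   # numerical integration
print(np.exp(mu + sigma**2 / 2))            # closed form for the log link (matches)
```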

  16. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  17. Engineering Parameters in Bioreactor's Design: A Critical Aspect in Tissue Engineering

    PubMed Central

    Amoabediny, Ghassem; Pouran, Behdad; Tabesh, Hadi; Shokrgozar, Mohammad Ali; Haghighipour, Nooshin; Khatibi, Nahid; Mottaghy, Khosrow; Zandieh-Doulabi, Behrouz

    2013-01-01

    Bioreactors are an important and inevitable part of any tissue engineering (TE) strategy, as they aid the construction of three-dimensional functional tissues. Since the ultimate aim of a bioreactor is to create a biological product, the engineering parameters, for example, internal and external mass transfer, fluid velocity, shear stress, electrical current distribution, and so forth, are worth investigating thoroughly. The effects of such engineering parameters on biological cultures have been addressed in only a few preceding studies. Furthermore, it would be highly inefficient to determine the optimal engineering parameters by trial and error. A solution is provided by emerging modeling and computational tools and by analyzing oxygen, carbon dioxide, nutrient and metabolic waste material transport, which can simulate and predict experimental results. Discovering the optimal engineering parameters is crucial not only to reduce the cost and time of experiments, but also to enhance the efficacy and functionality of the tissue construct. This review intends to provide an inclusive package of the engineering parameters together with their calculation procedures, in addition to the modeling techniques used in TE bioreactors. PMID:24000327

  18. Engineering parameters in bioreactor's design: a critical aspect in tissue engineering.

    PubMed

    Salehi-Nik, Nasim; Amoabediny, Ghassem; Pouran, Behdad; Tabesh, Hadi; Shokrgozar, Mohammad Ali; Haghighipour, Nooshin; Khatibi, Nahid; Anisi, Fatemeh; Mottaghy, Khosrow; Zandieh-Doulabi, Behrouz

    2013-01-01

    Bioreactors are an important and inevitable part of any tissue engineering (TE) strategy, as they aid the construction of three-dimensional functional tissues. Since the ultimate aim of a bioreactor is to create a biological product, the engineering parameters, for example, internal and external mass transfer, fluid velocity, shear stress, electrical current distribution, and so forth, are worth investigating thoroughly. The effects of such engineering parameters on biological cultures have been addressed in only a few preceding studies. Furthermore, it would be highly inefficient to determine the optimal engineering parameters by trial and error. A solution is provided by emerging modeling and computational tools and by analyzing oxygen, carbon dioxide, nutrient and metabolic waste material transport, which can simulate and predict experimental results. Discovering the optimal engineering parameters is crucial not only to reduce the cost and time of experiments, but also to enhance the efficacy and functionality of the tissue construct. This review intends to provide an inclusive package of the engineering parameters together with their calculation procedures, in addition to the modeling techniques used in TE bioreactors.

  19. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
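
    The Hodrick-Prescott filter splits a series into a smooth trend and a cyclical residual; thresholding the trend is one plausible way to segment drinking episodes. A sketch on synthetic TAC data (the episode shapes, sampling rate, smoothing parameter, and threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic TAC trace sampled every 5 minutes over 24 h: two episodes plus noise.
t = np.arange(0, 24, 5 / 60.0)
rng = np.random.default_rng(0)
tac = (np.exp(-0.5 * ((t - 6) / 1.5)**2)
       + 0.7 * np.exp(-0.5 * ((t - 16) / 2.0)**2)
       + 0.02 * rng.normal(size=t.size))

cycle, trend = hpfilter(tac, lamb=1e4)    # smooth trend isolates episode envelopes

# Episodes = contiguous runs where the trend exceeds a small threshold.
above = trend > 0.05
edges = np.flatnonzero(np.diff(above.astype(int)))
print(t[edges])   # approximate episode start/end times (hours)
```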

  20. Lateral distribution of the radio signal in extensive air showers measured with LOPES

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga, J. C.; Asch, T.; Badea, A. F.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Brüggemann, M.; Buchholz, P.; Buitink, S.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Finger, M.; Fuhrmann, D.; Gemmeke, H.; Ghia, P. L.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Krömer, O.; Kuijpers, J.; Lafebre, S.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Nigl, A.; Oehlschläger, J.; Over, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schröder, F.; Sima, O.; Singh, K.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.; Zensus, J. A.; LOPES Collaboration

    2010-01-01

    The antenna array LOPES is set up at the location of the KASCADE-Grande extensive air shower experiment in Karlsruhe, Germany, and aims to measure and investigate radio pulses from extensive air showers. The coincident measurements allow us to reconstruct the electric field strength at observation level as a function of general EAS parameters. In the present work, the lateral distribution of the radio signal in air showers is studied in detail. It is found that the lateral distributions of the electric field strengths in individual EAS can be described by an exponential function. For about 20% of the events a flattening towards the shower axis is observed, preferentially for showers with large inclination angles. The estimated scale parameters R0, describing the slope of the lateral profiles, range between 100 and 200 m. No evidence for a direct correlation of R0 with shower parameters like azimuth angle, geomagnetic angle, or primary energy is found. This indicates that the lateral profile is an intrinsic property of the radio emission during the shower development, which makes the radio detection technique suitable for large-scale applications.
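
    The quoted scale parameter R0 comes from fitting an exponential lateral profile to the measured field strengths; a common convention references the amplitude to a 100 m axis distance. A minimal sketch with hypothetical data points:

```python
import numpy as np
from scipy.optimize import curve_fit

# epsilon(d) = eps100 * exp(-(d - 100 m) / R0): exponential lateral profile.
profile = lambda d, eps100, R0: eps100 * np.exp(-(d - 100.0) / R0)

# Hypothetical field strengths (arbitrary units) at axis distances d (m).
d = np.array([60.0, 90.0, 120.0, 160.0, 200.0])
eps = np.array([8.1, 6.4, 5.2, 3.9, 3.0])

(eps100, R0), _ = curve_fit(profile, d, eps, p0=[6.0, 150.0])
print(f"R0 = {R0:.0f} m")   # scale parameter; LOPES finds 100-200 m
```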

  1. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
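
    A toy one-dimensional version of the pipeline (a plain polynomial fit stands in for sparse-grid interpolation, and the forward model, likelihood, and bounds are illustrative assumptions): build a cheap surrogate, weight quasi-Monte Carlo parameter samples by the surrogate posterior, and accumulate the prediction PDF.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical "expensive" forward model: one parameter -> one prediction.
forward = lambda k: np.exp(-k) + 0.1 * k**2

# 1) Cheap polynomial surrogate fitted on a coarse grid
#    (a plain polynomial stands in for sparse-grid interpolation here).
k_nodes = np.linspace(0.0, 2.0, 9)
coef = np.polyfit(k_nodes, forward(k_nodes), deg=6)
surrogate = lambda k: np.polyval(coef, k)

# 2) Quasi-Monte Carlo (Sobol) samples of the parameter space.
sampler = qmc.Sobol(d=1, scramble=True, seed=0)
k_samples = qmc.scale(sampler.random_base2(m=12), [0.0], [2.0]).ravel()

# 3) Surrogate posterior weights: Gaussian likelihood around an observed value.
obs, tau = 0.6, 0.05
w = norm.pdf(surrogate(k_samples), loc=obs, scale=tau)
w /= w.sum()

# 4) Weighted statistics of the prediction approximate its posterior PDF.
pred = surrogate(k_samples)
hist, edges = np.histogram(pred, bins=40, weights=w, density=True)
print(pred @ w)   # posterior-mean prediction
```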

  2. Spacecraft observations of a Maxwell Demon coating the separatrix of asymmetric magnetic reconnection with crescent-shaped electron distributions

    NASA Astrophysics Data System (ADS)

    Egedal, J.; Le, A.; Daughton, W.; Wetherton, B.; Cassak, Pa; Chen, Lj; Lavraud, B.; Dorell, J.; Avanov, L.; Gershman, D.

    2016-10-01

    During asymmetric magnetic reconnection at the dayside magnetopause, in situ spacecraft measurements show that electrons from the high-density inflow penetrate some distance into the low-density inflow. Supported by a kinetic simulation, we present a general derivation of an exclusion energy parameter, which provides a lower kinetic-energy bound for an electron to jump across the reconnection region from one inflow region to the other. As by a Maxwell demon, only high-energy electrons are permitted to cross the inner reconnection region, strongly impacting the form of the electron distribution function observed along the low-density-side separatrix. The dynamics produce two distinct flavors of crescent-shaped electron distributions in a thin boundary layer along the separatrix between the magnetospheric inflow and the reconnection exhaust. The analytical model presented relates these salient details of the distribution function to the electron dynamics in the inner reconnection region.

  3. Statistical thermodynamics of a two-dimensional relativistic gas.

    PubMed

    Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood

    2009-03-01

    In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system for studying relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).

  4. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
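
    Of the four candidate functions, the ex-Gaussian (a Gaussian convolved with an exponential tail) is the most widely used for RT data, and SciPy implements it as exponnorm. A sketch fitting synthetic RTs (the data and parameter values are synthetic, for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic RTs (ms): Gaussian stage plus an exponential tail.
rts = rng.normal(400, 50, 500) + rng.exponential(150, 500)

K, loc, scale = stats.exponnorm.fit(rts)   # maximum-likelihood fit
mu, sigma, tau = loc, scale, K * scale     # conventional ex-Gaussian parameters
print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms, tau={tau:.0f} ms")
```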

  5. Uncertainties in Galactic Chemical Evolution Models

    DOE PAGES

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...

    2016-06-15

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
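
    The Monte Carlo machinery itself is simple: draw each input parameter from its probability distribution, run the model once per draw, and read the 68%/95% bands off the ensemble percentiles. Below is a schematic stand-in (the two sampled parameters and the fake one-zone model are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 500

# Stand-ins for two of the seven input parameters and their uncertainties.
imf_slope = rng.normal(-2.3, 0.1, n_runs)                 # high-mass IMF slope
n_snia = rng.lognormal(np.log(1.5e-3), 0.3, n_runs)       # SNe Ia per solar mass

# Stand-in "model": an abundance-like track on a 50-point metallicity grid.
grid = np.linspace(0.0, 1.0, 50)
runs = np.array([s * grid + 0.5 * np.log10(n / 1.5e-3)
                 for s, n in zip(imf_slope, n_snia)])

# Most probable track and 68% / 95% confidence bands from percentiles.
median = np.percentile(runs, 50, axis=0)
lo68, hi68 = np.percentile(runs, [16, 84], axis=0)
lo95, hi95 = np.percentile(runs, [2.5, 97.5], axis=0)
print((hi68 - lo68).max(), (hi95 - lo95).max())   # band thicknesses
```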

  6. Uncertainties in Galactic Chemical Evolution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.

  7. Magnetohydrodynamic Flow by a Stretching Cylinder with Newtonian Heating and Homogeneous-Heterogeneous Reactions

    PubMed Central

    Hayat, T.; Hussain, Zakir; Alsaedi, A.; Farooq, M.

    2016-01-01

    This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed by using the homotopy analysis method (HAM). This method is very efficient for developing series solutions of highly nonlinear differential equations and does not depend on any small or large parameter, unlike other methods, i.e., the perturbation method, the δ-perturbation expansion method, etc. We obtain more accurate results as we increase the order of approximation. The effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. A comparison of the present study with previously published work is also made in the limiting sense. Numerical values of the skin friction coefficient and the Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Furthermore, the temperature profile decreases and the concentration profile increases as the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the opposite influence appears for the heterogeneous reaction parameter. PMID:27280883

  8. Magnetohydrodynamic Flow by a Stretching Cylinder with Newtonian Heating and Homogeneous-Heterogeneous Reactions.

    PubMed

    Hayat, T; Hussain, Zakir; Alsaedi, A; Farooq, M

    2016-01-01

    This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed by using the homotopy analysis method (HAM). This method is very efficient for developing series solutions of highly nonlinear differential equations and does not depend on any small or large parameter, unlike other methods, i.e., the perturbation method, the δ-perturbation expansion method, etc. We obtain more accurate results as we increase the order of approximation. The effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. A comparison of the present study with previously published work is also made in the limiting sense. Numerical values of the skin friction coefficient and the Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Furthermore, the temperature profile decreases and the concentration profile increases as the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the opposite influence appears for the heterogeneous reaction parameter.

  9. Temperature based Restricted Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping

    2016-01-01

    Restricted Boltzmann machines (RBMs), which apply graphical models to learning the probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of the existing schemes have considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature-based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
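
    The temperature enters as a simple rescaling of the logistic activation used for the hidden units, p(h = 1) = 1/(1 + exp(-x/T)); a minimal sketch of just that rescaling (the full TRBM training loop is not shown):

```python
import numpy as np

def logistic(x, T=1.0):
    # Larger T flattens the activation (less selective hidden units);
    # smaller T sharpens it toward a step function.
    return 1.0 / (1.0 + np.exp(-x / T))

x = np.array([-2.0, 0.0, 2.0])
for T in (0.5, 1.0, 2.0):
    print(T, logistic(x, T))
```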

  10. ATTITUDE FILTERING ON SO(3)

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2005-01-01

    A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.

  11. QCD-inspired spectra from Blue's functions

    NASA Astrophysics Data System (ADS)

    Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail

    1996-02-01

    We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.

  12. Use of multi-functional flexible micro-sensors for in situ measurement of temperature, voltage and fuel flow in a proton exchange membrane fuel cell.

    PubMed

    Lee, Chi-Yuan; Chan, Pin-Cheng; Lee, Chung-Ju

    2010-01-01

    Temperature, voltage and fuel flow distribution all contribute considerably to fuel cell performance. Conventional methods cannot accurately determine parameter changes inside a fuel cell. This investigation developed flexible, multi-functional micro-sensors on a 40 μm-thick stainless steel foil substrate using micro-electro-mechanical systems (MEMS) technology and embedded them in a proton exchange membrane fuel cell (PEMFC) to measure temperature, voltage and flow. Users can monitor and control the temperature, voltage and fuel flow distribution in the cell in situ, whereby both fuel cell performance and lifetime can be increased.

  13. The social welfare function and individual responsibility: some theoretical issues and empirical evidence.

    PubMed

    Dolan, Paul; Tsuchiya, Aki

    2009-01-01

    The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.

  14. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  15. Fusion cross sections for reactions involving medium and heavy nucleus-nucleus systems

    NASA Astrophysics Data System (ADS)

    Atta, Debasis; Basu, D. N.

    2014-12-01

    Existing data on near-barrier fusion excitation functions of medium and heavy nucleus-nucleus systems have been analyzed by using a simple diffused-barrier formula derived by assuming a Gaussian shape for the barrier-height distribution. The fusion cross section is obtained by folding the Gaussian barrier distribution with the classical expression for the fusion cross section at a fixed barrier. The energy dependence of the fusion cross section thus obtained provides a good description of the existing data on near-barrier fusion and capture excitation functions for medium and heavy nucleus-nucleus systems. Theoretical values for the parameters of the barrier distribution are estimated, which can be used for fusion or capture cross-section predictions that are especially important for planning experiments to synthesize new superheavy elements.
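
    The folding step can be written directly as a one-dimensional integral: a Gaussian distribution of barrier heights B (mean B0, width w) weighted by the classical above-barrier cross section πR²(1 − B/E). A numerical sketch with hypothetical parameter values (the paper works with the analytic closed form instead):

```python
import numpy as np
from scipy.integrate import quad

def fusion_xsec(E, B0, w, R):
    """Fold a Gaussian barrier-height distribution (mean B0, width w, in MeV)
    with the classical sharp-barrier cross section; R in fm, result in mb."""
    gauss = lambda B: np.exp(-(B - B0)**2 / (2 * w**2)) / (np.sqrt(2 * np.pi) * w)
    classical = lambda B: np.pi * R**2 * max(0.0, 1.0 - B / E)  # zero below barrier
    val, _ = quad(lambda B: gauss(B) * classical(B), B0 - 6 * w, min(E, B0 + 6 * w))
    return 10.0 * val   # 1 fm^2 = 10 mb

for E in (95.0, 100.0, 110.0):   # hypothetical system with B0 = 100 MeV
    print(E, fusion_xsec(E, B0=100.0, w=3.0, R=10.0))
```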

  16. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference (calibration) state can be used to improve the model's prediction accuracy at a different structural state, e.g., for the damaged structure. The effects of prediction error bias on the uncertainty of the predicted values are also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate the parameters of the initial FE model as well as the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including the prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.

  17. Intermittency via moments and distributions in central O+Cu collisions at 14.6 A·GeV/c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tannenbaum, M.J.

    Fluctuations in pseudorapidity distributions of charged particles from central (ZCAL) collisions of ¹⁶O+Cu at 14.6 A·GeV/c have been analyzed by Ju Kang using the method of scaled factorial moments as a function of the interval δη. An apparent power-law growth of the moments with decreasing interval is observed down to δη ≈ 0.1, and the measured slope parameters are found to obey two scaling rules. Previous experience with E_T distributions suggested that fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD), and excellent fits to the NBD were obtained in all δη bins. The k parameter of the NBD fit was found to increase linearly with the δη interval, which, due to the well-known property of the NBD under convolution, indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ≈ 0.1 are largely statistically independent.
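
    The NBD fit in each δη bin is a two-parameter maximum-likelihood problem in (k, μ). A minimal sketch on synthetic multiplicities, using SciPy's parameterization nbinom(n, p) with n = k and p = k/(k + μ) (the data here are simulated, not E802 measurements):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
k_true, mu = 3.0, 10.0
counts = rng.negative_binomial(k_true, k_true / (k_true + mu), size=2000)

def neg_loglik(params):
    k, m = params
    # SciPy's nbinom(n, p) corresponds to n = k, p = k / (k + mu).
    return -stats.nbinom.logpmf(counts, k, k / (k + m)).sum()

res = optimize.minimize(neg_loglik, x0=[1.0, counts.mean()],
                        bounds=[(1e-3, None), (1e-3, None)])
print(res.x)   # fitted (k, mu), close to (3, 10)
```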

  18. Intermittency via moments and distributions in central O+Cu collisions at 14.6 A·GeV/c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tannenbaum, M.J.; The E802 Collaboration

    Fluctuations in pseudorapidity distributions of charged particles from central (ZCAL) collisions of ¹⁶O+Cu at 14.6 A·GeV/c have been analyzed by Ju Kang using the method of scaled factorial moments as a function of the interval δη. An apparent power-law growth of the moments with decreasing interval is observed down to δη ≈ 0.1, and the measured slope parameters are found to obey two scaling rules. Previous experience with E_T distributions suggested that fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD), and excellent fits to the NBD were obtained in all δη bins. The k parameter of the NBD fit was found to increase linearly with the δη interval, which, due to the well-known property of the NBD under convolution, indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ≈ 0.1 are largely statistically independent.

  19. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships serving as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
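
    The selection step can be summarized as: score each Pareto parameter set by a weighted, a-priori-normalized distance to the neighboring basin's parameters and keep the minimizer. The weighting scheme and all numbers below are illustrative assumptions, not values from the study:

```python
import numpy as np

def closeness(pareto_sets, apriori, neighbor, weights):
    """Weighted distance of each Pareto set to a neighboring basin's parameters,
    normalized by the a-priori values; smaller = more regionally consistent."""
    diffs = (pareto_sets - neighbor) / apriori
    return np.sqrt((weights * diffs**2).sum(axis=1))

# Hypothetical: 4 Pareto sets x 3 parameters; weights favor the parameter
# assumed most physically similar across basins (index 0).
pareto = np.array([[1.10, 0.40, 2.0],
                   [0.90, 0.60, 1.8],
                   [1.40, 0.50, 2.2],
                   [1.00, 0.45, 2.1]])
apriori = np.array([1.0, 0.5, 2.0])
neighbor = np.array([1.05, 0.5, 1.9])
w = np.array([0.6, 0.3, 0.1])
print(np.argmin(closeness(pareto, apriori, neighbor, w)))   # index of chosen set
```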

  20. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
