NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network is not guaranteed to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression.
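The difference between the two mixtures described above lies entirely in how the positive counts are handled: the hurdle model normalizes them by a zero-truncated Poisson, while the ZIP model keeps the full Poisson (so zeros can arise from either component). A minimal sketch, with illustrative parameter values:

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: mixture of a point mass at zero and a full Poisson.
    P(0) = pi + (1 - pi) e^{-lam}; P(k) = (1 - pi) Poisson(k; lam) for k >= 1."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

def hurdle_pmf(k, p0, lam):
    """Hurdle: point mass p0 at zero, zero-truncated Poisson for k >= 1."""
    if k == 0:
        return p0
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return (1 - p0) * poisson / (1 - math.exp(-lam))
```

Note that under the hurdle model the zero probability is exactly p0, whereas under ZIP it exceeds pi whenever lam is finite.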
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
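The closed-form likelihood that the i.i.d. Gumbel assumption buys is the familiar logit formula: the probability of choosing alternative i is the softmax of the systematic utilities. A minimal sketch (utility values are illustrative):

```python
import math

def mnl_probabilities(utilities):
    """Closed-form choice probabilities implied by i.i.d. Gumbel errors:
    P_i = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)  # subtract the max to keep the exponentials stable
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

It is exactly this closed form that breaks down when the error distribution departs from the Gumbel, which is what the proposed likelihood-ratio test probes.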
A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows
NASA Astrophysics Data System (ADS)
Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng
2018-05-01
A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.
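As a simpler illustration of the Hermite-expansion idea, the sketch below shows the standard second-order truncated equilibrium on the two-dimensional D2Q9 lattice. This is an assumption-laden stand-in: the paper's compressible double-distribution-function model uses the three-dimensional D3Q27 and D3Q39 lattices and higher-order Hermite terms, which are omitted here.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1 / 3  # lattice speed of sound squared

def equilibrium(rho, u):
    """Second-order Hermite (Maxwellian) expansion of the equilibrium
    distribution: f_i^eq = w_i rho [1 + c.u/cs2 + (c.u)^2/(2 cs2^2) - u.u/(2 cs2)]."""
    cu = c @ u
    usq = u @ u
    return w * rho * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - usq / (2 * cs2))
```

Even at this truncation order, the zeroth and first velocity moments of f^eq reproduce the density and momentum exactly, which is the property the Hermite construction is built around.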
The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.
2001-05-01
We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.
Models of violently relaxed galaxies
NASA Astrophysics Data System (ADS)
Merritt, David; Tremaine, Scott; Johnstone, Doug
1989-02-01
The properties of spherical self-gravitating models derived from two distribution functions that incorporate, in a crude way, the physics of violent relaxation are investigated. The first distribution function is identical to the one discussed by Stiavelli and Bertin (1985) except for a change in the sign of the 'temperature', i.e., e exp(-aE) to e exp(+aE). It is shown that these 'negative temperature' models provide a much better description of the end-state of violent relaxation than 'positive temperature' models. The second distribution function is similar to the first except for a different dependence on angular momentum. Both distribution functions yield single-parameter families of models with surface density profiles very similar to the R exp 1/4 law. Furthermore, the central concentration of models in both families increases monotonically with the velocity anisotropy, as expected in systems that formed through cold collapse.
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring and left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use.
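The full log-likelihood that such packages maximize splits into two kinds of contributions: observed events contribute the log density, and right-censored times contribute the log survival function. A Python sketch for the simplest case, the exponential distribution, where the maximizer has a closed form (the data values are illustrative, and this is not flexsurv's own code):

```python
import math

def exp_loglik(lam, times, events):
    """Right-censored full log-likelihood for an exponential model:
    events (d=1) contribute log f(t) = log(lam) - lam*t,
    censored observations (d=0) contribute log S(t) = -lam*t."""
    ll = 0.0
    for t, d in zip(times, events):
        ll += d * math.log(lam) - lam * t
    return ll

def exp_mle(times, events):
    """Closed-form MLE of the exponential rate: events / total time at risk."""
    return sum(events) / sum(times)
```

For richer families (generalized gamma, spline hazards) the same likelihood is maximized numerically, but the event/censoring decomposition is identical.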
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
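The waiting-time density described above, a sum of (power law) x (stretched exponential) products, can be sketched as follows; the functional form is schematic and every parameter value below is illustrative rather than a fitted value from the BUND data:

```python
import math

def waiting_time_density(t, terms):
    """Unnormalized waiting-time density built as a sum of products of a power
    law t^{-alpha} and a stretched exponential exp(-(t/tau)^beta).
    Each term is a tuple (weight, alpha, tau, beta); values are illustrative."""
    return sum(wgt * t**(-alpha) * math.exp(-(t / tau)**beta)
               for wgt, alpha, tau, beta in terms)
```

With several terms of different characteristic times tau, the density can interpolate between power-law behavior at short times and an exponential-like cutoff at long times, which is what makes the form flexible enough to fit empirical waiting-time distributions.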
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many kinds of items and has a simple statistical form; its characteristic feature is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution. In this paper our aim is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach, and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and point and interval estimates of the hazard function and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
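For the single-risk exponential building block, the constant hazard gives the reliability R(t) = exp(-lam t), and with a Jeffreys-type non-informative prior p(lam) ~ 1/lam the posterior for complete (uncensored) data is Gamma with shape n and rate equal to the total observed time. A sketch, with illustrative data (the paper's competing-risks extension is not reproduced here):

```python
import math

def reliability(t, lam):
    """Exponential reliability R(t) = exp(-lam t); the hazard is the constant lam."""
    return math.exp(-lam * t)

def posterior_params(times):
    """Gamma posterior (shape = n, rate = sum of times) for the exponential rate
    under the Jeffreys non-informative prior p(lam) ~ 1/lam, complete data."""
    return len(times), sum(times)
```

The posterior mean shape/rate then coincides with the classical MLE n / sum(t), one reason the non-informative analysis is convenient.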
Ye, Xin; Pendyala, Ram M.; Zou, Yajie
2017-01-01
A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.
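Schematically, the semi-nonparametric extension multiplies a base density by a squared polynomial and renormalizes, so the result is a valid density that can be multimodal. The sketch below uses a raw polynomial and numerical normalization; the paper's orthonormal Legendre construction (which yields the closed-form likelihood) is deliberately simplified away, and the coefficients are illustrative:

```python
import numpy as np

def gumbel_pdf(x):
    """Standard type-I extreme value (Gumbel) density."""
    return np.exp(-x - np.exp(-x))

def snp_density(x, coeffs, grid=np.linspace(-10.0, 20.0, 20001)):
    """Schematic semi-nonparametric density: squared polynomial times the Gumbel
    base density, normalized numerically on a wide grid. Coefficients are
    illustrative; the orthonormal-polynomial machinery of the paper is omitted."""
    poly = np.polyval(coeffs, x)
    norm = np.trapz(np.polyval(coeffs, grid)**2 * gumbel_pdf(grid), grid)
    return poly**2 * gumbel_pdf(x) / norm
```

Because the polynomial factor is squared, non-negativity is automatic, which is what lets the family nest the pure Gumbel case (constant polynomial) and test departures from it.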
A quark model analysis of the transversity distribution
NASA Astrophysics Data System (ADS)
Scopetta, Sergio; Vento, Vicente
1998-04-01
The feasibility of measuring chiral-odd parton distribution functions in polarized Drell-Yan and semi-inclusive experiments has renewed theoretical interest in their study. Models of hadron structure have proven successful in describing the gross features of the chiral-even structure functions. Similar expectations motivated our study of the transversity parton distributions in the Isgur-Karl and MIT bag models. We confirm, by performing a NLO calculation, the diverse low x behaviors of the transversity and spin structure functions at the experimental scale and show that it is fundamentally a consequence of the different behaviors under evolution of these functions. The inequalities of Soffer establish constraints between data and model calculations of the chiral-odd transversity function. The approximate compatibility of our model calculations with these constraints confers credibility to our estimates.
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new, more general cumulative distribution function for the flow domain is proposed, which includes several other CDF models reported in the literature as special cases. This general form of the cumulative distribution function also allows the Rouse equation to be derived. The entropy-based approach makes it possible to estimate model parameters from measured sediment concentration data, which is an advantage of using entropy theory. Finally, the parameters of the entropy-based model are also expressed as functions of the Rouse number, establishing a link between the parameters of the deterministic and probabilistic approaches.
Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers
2015-01-01
We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...
Nonparametric Bayesian inference for mean residual life functions in survival analysis.
Poynor, Valerie; Kottas, Athanasios
2018-01-19
Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples.
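The MRL function has the standard closed-form definition m(t) = E[T - t | T > t] = (integral of S from t to infinity) / S(t). A numerical sketch under a user-supplied survival function (not the paper's Dirichlet process mixture; the exponential used in the test is just the textbook case whose MRL is the constant 1/lam):

```python
import numpy as np

def mean_residual_life(t, survival, upper=200.0, n=200001):
    """Mean residual life m(t) = (integral_t^inf S(u) du) / S(t), evaluated by
    trapezoidal quadrature; `upper` must be large enough that S is negligible there."""
    u = np.linspace(t, upper, n)
    return np.trapz(survival(u), u) / survival(t)
```

Since m(t) determines S(t) up to the mean, checking a fitted model's MRL against such a direct numerical evaluation is a useful sanity check.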
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard function, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown.
NASA Astrophysics Data System (ADS)
Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi
2015-12-01
This paper applies a Markov chain to the genome shared identical by descent (IBD) by two individuals in the full-sibs model. The full-sibs model is a continuous-time Markov chain with three states. In the full-sibs model, we seek the cumulative distribution function of the number of sub-segments having 2 IBD haplotypes within a chromosome segment of length t Morgan, and the cumulative distribution function of the number of sub-segments having at least 1 IBD haplotype within a chromosome segment of length t Morgan. These cumulative distribution functions are developed via the moment generating function.
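For a continuous-time Markov chain with generator Q, the transition probabilities over a segment of length t are P(t) = expm(Q t). A sketch using a truncated Taylor series for the matrix exponential; the three-state rate matrix below is purely hypothetical and is not the IBD generator of the full-sibs model:

```python
import numpy as np

def transition_matrix(Q, t, terms=60):
    """P(t) = expm(Q t) via truncated Taylor series; adequate when ||Q t|| is modest.
    Rows of a valid generator Q sum to zero, so rows of P(t) sum to one."""
    A = Q * t
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        P = P + term
    return P

# hypothetical 3-state generator (illustrative rates only): rows sum to zero
Q = np.array([[-0.4,  0.3,  0.1],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.4, -0.5]])
```

For larger ||Q t|| one would use scaling-and-squaring or a library routine, but the series form keeps the sketch self-contained.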
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis for its similarity to the exponential distribution combined with its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model when misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Francois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
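The two-component generator described above, a first-order two-state Markov chain for wet/dry occurrence plus a mixed exponential for wet-day amounts, can be sketched as follows; all parameter values are illustrative, not fitted values from the Quebec stations:

```python
import random

def simulate_precip(n_days, p_wd, p_ww, p_mix, lam1, lam2, seed=42):
    """Two-part daily precipitation generator.
    Occurrence: first-order two-state Markov chain with
    p_wd = P(wet | dry yesterday) and p_ww = P(wet | wet yesterday).
    Amounts: mixed exponential, drawing rate lam1 with probability p_mix,
    else lam2. All parameter values passed in are illustrative."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if not wet:
            series.append(0.0)
        else:
            lam = lam1 if rng.random() < p_mix else lam2
            series.append(rng.expovariate(lam))
    return series
```

Swapping the amount distribution (Gamma, Weibull, hybrid exponential/Pareto) changes only the wet-day draw, which is what makes the comparison across six distributions straightforward.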
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter estimates of myocardial blood flow was observed in these volunteers between single and dual bolus analyses.
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems
2017-05-24
[Status-table residue; recoverable entries:] Functional Modeling Compiler (SCCT): FM Compiler and Key Performance Indicators (KPIs), May 2018, pending. Model Management Backbone (SCCT): MMB demonstration; agent-based distributed runtime; KPIs for single/multicore controllers and temporal/spatial domains; integration of the model management backbone. Distributed Runtime (UCI): not started. Model Management Backbone (SCCT): not started. Siemens Corporation, Corporate Technology. Unrestricted.
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency management, as well as for estimating pressure on the environment and human exposure and health risks. However, most studies have relied on census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The workflow for functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy with validation points. The result is shown in Fig. 1. Second, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7 and typically showed significant variation over the region compared with the traditional OLS model. The result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with light value. The result is shown in Fig. 3. Results showed: (1) population density is not linearly correlated with light brightness in the global model; (2) VIIRS night-time light data can estimate population density when integrating functional zones at the city level.
(3) GWR is a robust model for mapping population distribution: the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming that GWR models achieve better prediction accuracy. This method thus provides detailed population density information for microscale urban studies.
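At its core, GWR fits a separate weighted least-squares regression at each target location, with observations down-weighted by distance through a kernel. A schematic local fit at one location (Gaussian kernel; the synthetic data in the test are illustrative, and a full GWR would repeat this at every prediction point and calibrate the bandwidth):

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Local weighted least squares at one target location:
    Gaussian kernel weights w_i = exp(-(d_i / bandwidth)^2) by distance to target,
    then beta = (X'WX)^{-1} X'Wy. A sketch of GWR, not a full implementation."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d / bandwidth)**2)
    W = np.diag(w)
    Xd = np.column_stack([np.ones(len(y)), X])  # prepend an intercept column
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
```

Because the weights vary with the target location, the fitted coefficients form spatial surfaces, which is what captures the nonstationary light-to-population relationship.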
NASA Astrophysics Data System (ADS)
Ochoa, Diego Alejandro; García, Jose Eduardo
2016-04-01
The Preisach model is a classical method for describing nonlinear behavior in hysteretic systems. According to this model, a hysteretic system contains a collection of simple bistable units which are characterized by an internal field and a coercive field. This set of bistable units exhibits a statistical distribution that depends on these fields as parameters. Thus, nonlinear response depends on the specific distribution function associated with the material. This model is satisfactorily used in this work to describe the temperature-dependent ferroelectric response in PZT- and KNN-based piezoceramics. A distribution function expanded in a Maclaurin series, retaining only the first terms in the internal field and the coercive field, is proposed. Changes in the coefficient relations of a single distribution function allow us to explain the complex temperature dependence of hard piezoceramic behavior. A similar analysis based on the same form of the distribution function shows that the KNL-NTS properties soften around its orthorhombic-to-tetragonal phase transition.
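The bistable units of the classical Preisach model can be sketched as relays: each relay switches up when the input exceeds its upper field alpha, down when the input falls below its lower field beta (alpha >= beta), and otherwise remembers its state. A minimal sketch with an equally weighted relay grid; a real material would instead weight the relays by the fitted distribution function discussed above:

```python
def preisach_output(input_history, relays):
    """Classical Preisach model with equally weighted relays.
    Each relay is a pair (alpha, beta) with alpha >= beta; the output is the
    average relay state after applying the input history in order.
    The relay grid used below is illustrative, not a fitted distribution."""
    states = [-1.0] * len(relays)          # start from negative saturation
    for u in input_history:
        for i, (alpha, beta) in enumerate(relays):
            if u >= alpha:
                states[i] = 1.0
            elif u <= beta:
                states[i] = -1.0
            # otherwise the relay keeps its state: this memory produces hysteresis
    return sum(states) / len(states)

# triangular grid of relays on the half-plane alpha >= beta
relays = [(a / 4, b / 4) for a in range(-4, 5) for b in range(-4, a + 1)]
```

The path dependence is visible directly: the output at zero input differs depending on whether the system arrived there from positive or negative saturation.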
A generalized statistical model for the size distribution of wealth
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2012-12-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated distribution function for modeling individual incomes, rooted in the framework of κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. To check the validity of the proposed model, we analyze US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to that of any other model known in the literature.
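The building block of the κ-generalized family is the Kaniadakis κ-exponential, which interpolates between an exponential body and a power-law (Pareto) tail. A minimal sketch, with parameter names following common usage rather than necessarily the paper's notation:

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    return (math.sqrt(1.0 + kappa * kappa * x * x) + kappa * x) ** (1.0 / kappa)

def kappa_survival(x, alpha, beta, kappa):
    """Complementary CDF of a kappa-generalized model,
    exp_kappa(-beta * x**alpha): exponential-like for small x,
    power-law tail ~ x**(-alpha/kappa) for large x.
    Illustrative form; see the cited papers for the exact model."""
    return kappa_exp(-beta * x ** alpha, kappa)
```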
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed ontology, nor any database allowing programmatic access, has been available. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org Contact: mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of the Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, and the corresponding forms of conditional distributions are discussed. Practical estimation techniques are also given.
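One classical example of the 'dependence function' construction mentioned above is the Gumbel logistic model with unit Frechet margins; this is offered as a generic illustration of the idea, not as the paper's specific two-parameter Weibull/Frechet forms.

```python
import math

def bilogistic_cdf(z1, z2, alpha):
    """Gumbel logistic bivariate extreme-value CDF with unit Frechet
    margins, G(z1, z2) = exp(-(z1**(-1/a) + z2**(-1/a))**a) for
    0 < alpha <= 1; alpha = 1 gives independence, alpha -> 0 gives
    complete dependence. Illustrative example only."""
    s = z1 ** (-1.0 / alpha) + z2 ** (-1.0 / alpha)
    return math.exp(-(s ** alpha))
```

Letting one argument tend to infinity recovers the unit Frechet margin exp(-1/z), which is the analytical condition a valid dependence function must satisfy.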
Naima: a Python package for inference of particle distribution properties from nonthermal spectra
NASA Astrophysics Data System (ADS)
Zabalza, V.
2015-07-01
The ultimate goal of observing nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available for inferring particle distribution properties from the observed photon spectra from X-rays to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow these models to be used to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation, confirming that the SLW model is equivalent to the FSK model in its k-distribution function form. The numerical implementation of the SLW model relies on a somewhat arbitrary discretization of the absorption cross section, whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient, as fewer gray gases are required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the difference in efficiency between the two methods. The FSK model is found to be more accurate than the SLW model for radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas, owing to its explicit treatment of 'clear gas.'
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to interpret probability weighting functions from the point of view of the waiting time facing a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, a probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to measure the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
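The closed form stated in the abstract, w(p) = (1 - k log p)^(-1), can be written down directly; k is the individual's discounting parameter.

```python
import math

def w(p, k):
    """Probability weighting from hyperbolic discounting of waiting
    time: w(p) = 1 / (1 - k * log(p)), for 0 < p <= 1 and k > 0."""
    return 1.0 / (1.0 - k * math.log(p))
```

By construction w(1) = 1, w is increasing in p, and for moderate k the function overweights small probabilities, the qualitative shape probability weighting functions are meant to capture.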
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived using Meijer's G-function.
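A sketch of how an IGG variate can be generated as the gamma-times-inverse-Gaussian modulation the abstract describes; the unit-mean normalization of the gamma factor is an illustrative choice, not necessarily the paper's parameterization.

```python
import math
import random

def inv_gaussian(mu, lam):
    """Inverse Gaussian sample via the Michael-Schucany-Haas method."""
    v = random.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * v) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * v + mu * mu * v * v)
    if random.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def igg_sample(alpha, mu, lam):
    """One irradiance sample: a unit-mean gamma small-scale fluctuation
    modulated by an inverse-Gaussian large-scale fluctuation."""
    return random.gammavariate(alpha, 1.0 / alpha) * inv_gaussian(mu, lam)
```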
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Colohan, R. J. E.
1999-09-01
A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
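A minimal sketch of the degree-day melt logic that models of this kind build on (without the wind re-distribution function the abstract highlights); the degree-day factor and temperature threshold are illustrative defaults, not calibrated values from the Cairngorm catchment, and this is not the DIY code.

```python
def degree_day_melt(temps, precip, ddf=3.0, t_crit=0.0):
    """Toy degree-day snow routine. Daily precipitation accumulates
    as snow when the mean temperature is at or below t_crit (deg C);
    otherwise the pack melts at ddf * (T - t_crit) mm/day, capped by
    the remaining pack. Returns the daily melt series and the final
    pack depth (mm)."""
    pack, melts = 0.0, []
    for t, p in zip(temps, precip):
        if t <= t_crit:
            pack += p          # precipitation falls and stays as snow
            melt = 0.0
        else:
            melt = min(pack, ddf * (t - t_crit))
            pack -= melt
        melts.append(melt)
    return melts, pack
```

The surviving pack at the end of the series is exactly the kind of stored water that, in the study, sustains baseflows after the bulk of the catchment has melted out.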
NASA Astrophysics Data System (ADS)
Colonna, G.; D'Ambrosio, D.; D'Ammando, G.; Pietanza, L. D.; Capitelli, M.
2014-12-01
A state-to-state model of H2/He plasmas, coupling the master equations for the internal distributions of heavy species with the transport equation for free electrons, has been used as a basis for implementing a multi-temperature kinetic model. In the multi-temperature model, the internal distributions of heavy particles are Boltzmann, the electron energy distribution function is Maxwellian, and the rate coefficients of the elementary processes become functions of local temperatures associated with the relevant equilibrium distributions. The state-to-state and multi-temperature models have been compared in the case of a homogeneous recombining plasma, reproducing the conditions met during supersonic expansion through converging-diverging nozzles.
A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil
Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo
2014-01-01
This article discusses the dynamics of the diameter distribution in stands of black wattle throughout the growth cycle using the Weibull probability density function. The parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a diameter distribution model whose coefficients are related to environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function depend on rainfall. Such a function can have important applications, such as simulating growth potential in regions where historical growth data are lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project diameter growth based on the rainfall affecting the forest over a certain time period. PMID:24932909
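The two ingredients of such a model can be sketched as follows: the two-parameter Weibull density over diameters, plus a link from accumulated rainfall to a Weibull parameter. The linear link and its coefficients below are hypothetical placeholders, not the fitted values from the paper.

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density over diameter d > 0."""
    return (shape / scale) * (d / scale) ** (shape - 1.0) \
        * math.exp(-((d / scale) ** shape))

def scale_from_rainfall(rain_mm, a=0.01, b=5.0):
    """Hypothetical linear link between accumulated rainfall and the
    Weibull scale parameter; a and b are illustrative coefficients."""
    return b + a * rain_mm
```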
Shao, J Y; Shu, C; Huang, H B; Chew, Y T
2014-03-01
A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method improves the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] by correctly accounting for density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To consider it correctly, the particle distribution function in the LBE must be for the local density and momentum. However, solving the LBE for such a distribution function encounters severe numerical instability. To overcome this difficulty, a transformation similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into one for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum, and at the same time considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model offers a clear improvement over the ZSC model in solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but much more efficient than the LL model.
Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans
2007-02-01
Numerous models for predicting species distributions have been developed for conservation purposes. Most make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues, including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, the transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable, although to different degrees, among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources may transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.
Bernstein-Greene-Kruskal theory of electron holes in superthermal space plasma
NASA Astrophysics Data System (ADS)
Aravindakshan, Harikrishnan; Kakad, Amar; Kakad, Bharati
2018-05-01
Several spacecraft missions have observed electron holes (EHs) in Earth's and other planetary magnetospheres. These EHs are modeled with stationary solutions of the Vlasov-Poisson equations, obtained by adopting the Bernstein-Greene-Kruskal (BGK) approach. A survey of the literature shows that BGK EHs are modeled using either a thermal (Maxwellian) distribution function or a statistical distribution derived from particular spacecraft observations. However, Maxwellian distributions are quite rare in space plasmas; most space plasmas are superthermal in nature and are generally described by the kappa distribution. We have developed a one-dimensional BGK model of EHs for space plasmas that follow a superthermal kappa distribution. The analytical solution of the trapped electron distribution function for such plasmas is derived. The trapped particle distribution function in a plasma following the kappa distribution is found to be steeper and denser than that for a Maxwellian distribution. The width-amplitude relation of the perturbation for superthermal plasma is derived, and the allowed regions of stable BGK solutions are obtained. We find that, for small amplitude perturbations, stable BGK solutions are better supported by superthermal plasmas than by thermal plasmas.
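For reference, one common 1D form of the kappa distribution can be sketched as below; note that conventions for the exponent vary in the literature (here the exponent is -kappa), so this is illustrative rather than the specific form used in the paper.

```python
import math

def kappa_dist_1d(v, theta, kappa):
    """A 1D kappa (superthermal) velocity distribution,
    f(v) proportional to (1 + v**2 / (kappa * theta**2))**(-kappa),
    normalized to unity. Requires kappa > 1/2; approaches a
    Maxwellian as kappa -> infinity."""
    norm = math.gamma(kappa) / (math.sqrt(math.pi * kappa) * theta
                                * math.gamma(kappa - 0.5))
    return norm * (1.0 + v * v / (kappa * theta * theta)) ** (-kappa)
```

The power-law tail is what makes the superthermal population "denser" at high velocities than a Maxwellian with the same thermal speed.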
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model was proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release.
We show that this assumption is not accurate.
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
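The synthetic-data side of such a study can be sketched as a shot-noise process: Poisson arrivals, a one-sided exponential pulse shape, and exponentially distributed amplitudes. All parameter values below are illustrative, not values from the paper.

```python
import math
import random

def shot_noise(t_max, rate, tau, amp_mean, dt=0.02, seed=0):
    """Synthetic intermittent signal: superposition of uncorrelated
    pulses arriving as a Poisson process with rate 'rate', one-sided
    exponential pulse shape exp(-t/tau), and exponentially
    distributed amplitudes with mean amp_mean."""
    rng = random.Random(seed)
    arrivals, t = [], rng.expovariate(rate)
    while t < t_max:
        arrivals.append((t, rng.expovariate(1.0 / amp_mean)))
        t += rng.expovariate(rate)
    times = [dt * i for i in range(int(t_max / dt))]
    signal = [sum(a * math.exp(-(ti - tk) / tau)
                  for tk, a in arrivals if tk <= ti)
              for ti in times]
    return times, signal
```

With exponential amplitudes the signal is positive definite, which is exactly the limitation the abstract discusses; swapping `expovariate` for a symmetric amplitude distribution removes it.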
NASA Astrophysics Data System (ADS)
Syaina, L. P.; Majidi, M. A.
2018-04-01
The single impurity Anderson model describes a system of non-interacting conduction electrons coupled to a localized orbital with strongly interacting electrons at a particular site. This model has proven successful in explaining the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how its properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose a theoretical study of those aspects of the single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of some conduction electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all the distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of the controllable parameters, and discuss our results in connection with the criteria for the occurrence of a metal-insulator transition in this system.
A generalization of the power law distribution with nonlinear exponent
NASA Astrophysics Data System (ADS)
Prieto, Faustino; Sarabia, José María
2017-01-01
The power law distribution is usually used to fit data in the upper tail of the distribution. However, it is often not valid for modeling data over the whole range. In this paper, we present a new family of distributions, the so-called Generalized Power Law (GPL), which can be useful for modeling data over the whole range and possesses power law tails. To do that, we model the exponent of the power law using a non-linear function which depends on the data and two parameters. We then provide some basic properties and some specific models of this new family of distributions. After that, we study a relevant model of the family, with special emphasis on the quantile and hazard functions and the corresponding estimation and testing methods. Finally, as empirical evidence, we study how debt is distributed across municipalities in Spain. We check that the power law model is valid only in the upper tail; we show analytically and graphically the good performance of the new model on municipal debt data over the whole range; and we compare the new distribution with other well-known distributions, including the Lognormal, the Generalized Pareto, the Fisk, the Burr type XII and the Dagum models.
A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems
Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.
2008-01-01
Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) have been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. © 2007 Elsevier B.V. All rights reserved.
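The null-model logic can be sketched as follows. This is a simplified version: each species gets a random functional group and a random aggregation, whereas the paper's null holds the observed structure (n, number of discontinuities, number of functional groups) fixed.

```python
import random
import statistics

def null_variance_richness(n_species, n_aggs, n_groups,
                           n_sims=1000, seed=0):
    """Null-model distribution of the variance, across body-mass
    aggregations, of functional-group richness (number of distinct
    groups present per aggregation), under random assignment."""
    rng = random.Random(seed)
    variances = []
    for _ in range(n_sims):
        aggs = [set() for _ in range(n_aggs)]
        for _ in range(n_species):
            aggs[rng.randrange(n_aggs)].add(rng.randrange(n_groups))
        variances.append(statistics.pvariance([len(g) for g in aggs]))
    return variances
```

An observed variance falling in the lower tail of this null distribution would indicate the non-random, evenly spread function that the cross-scale resilience model predicts.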
Voltage stress effects on microcircuit accelerated life test failure rates
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The applicability of the Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main-distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
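The temperature dependence described above is commonly expressed as an Arrhenius acceleration factor between stress and use conditions; one common Eyring-type extension multiplies this by a voltage term. The voltage coefficient b below is an illustrative constant, not a value fitted in the study.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Arrhenius acceleration factor between a stress and a use
    junction temperature (deg C) for activation energy ea_ev (eV)."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

def eyring_af(t_use_c, t_stress_c, ea_ev, v_use, v_stress, b=0.1):
    """One common Eyring-type extension: the thermal term multiplied
    by a voltage acceleration exp(b * (v_stress - v_use))."""
    return arrhenius_af(t_use_c, t_stress_c, ea_ev) \
        * math.exp(b * (v_stress - v_use))
```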
Final state interactions and inclusive nuclear collisions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Dubey, Rajendra R.
1993-01-01
A scattering formalism is developed in a multiple scattering model to describe inclusive momentum distributions for high-energy projectiles. The effects of final state interactions on response functions and momentum distributions are investigated. Calculations for high-energy protons that include shell model response functions are compared with experiments.
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
Dominant role of many-body effects on the carrier distribution function of quantum dot lasers
NASA Astrophysics Data System (ADS)
Peyvast, Negin; Zhou, Kejia; Hogg, Richard A.; Childs, David T. D.
2016-03-01
The effects of free-carrier-induced shift and broadening on the carrier distribution function are studied considering different extreme cases of carrier statistics (Fermi-Dirac and random carrier distributions), as well as quantum dot (QD) ensemble inhomogeneity and state separation, using a Monte Carlo model. With this model, we show that the dominant factor determining the carrier distribution function is free-carrier effects, not the choice of carrier statistics. Using empirical values of the free-carrier-induced shift and broadening, good agreement is obtained with experimental data from QD materials under electrical injection for both extreme cases of carrier statistics.
NASA Technical Reports Server (NTRS)
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
NASA Technical Reports Server (NTRS)
Freilich, Michael H.; Dunbar, R. Scott
1993-01-01
Calculation of accurate vector winds from scatterometers requires knowledge of the relationship between the backscatter cross section and the geophysical variable of interest. As the detailed dynamics of wind generation of centimetric waves and of radar-sea surface scattering at moderate incidence angles are not well known, empirical scatterometer model functions relating backscatter to winds must be developed. Less well appreciated is the fact that, given an accurate model function and some knowledge of the dominant scattering mechanisms, significant information on the amplitudes and directional distributions of centimetric roughness elements on the sea surface can be inferred. Accurate scatterometer model functions can thus be used to investigate wind generation of short waves under realistic conditions. The present investigation involves developing an empirical model function for the C-band (5.3 GHz) ERS-1 scatterometer and comparing Ku-band model functions with the C-band model to infer information on the two-dimensional spectrum of centimetric roughness elements in the ocean. The C-band model function development is based on collocations of global backscatter measurements with operational surface analyses produced by meteorological agencies. Strengths and limitations of the method are discussed, and the resulting model function is validated in part through comparison with the actual distributions of backscatter cross-section triplets. Details of the directional modulation as well as the wind speed sensitivity at C-band are investigated. Analysis of persistent outliers in the data is used to infer the magnitudes of non-wind effects (such as atmospheric stratification, swell, etc.). The ERS-1 C-band instrument and the Seasat Ku-band (14.6 GHz) scatterometer both imaged waves of approximately 3.4 cm wavelength, assuming that Bragg scattering is the dominant mechanism.
Comparisons of the C-band and Ku-band model functions are used both to test the validity of the postulated Bragg mechanism and to investigate the directional distribution of the imaged waves under a variety of conditions where Bragg scatter is dominant.
Grid Integrated Distributed PV (GridPV) Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reno, Matthew J.; Coogan, Kyle
2014-12-01
This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.
Beyond the excised ensemble: modelling elliptic curve L-functions with random matrices
NASA Astrophysics Data System (ADS)
Cooper, I. A.; Morris, Patrick W.; Snaith, N. C.
2016-02-01
The ‘excised ensemble’, a random matrix model for the zeros of quadratic twist families of elliptic curve L-functions, was introduced by Dueñez et al (2012 J. Phys. A: Math. Theor. 45 115207). The excised model is motivated by a formula for central values of these L-functions in a paper by Kohnen and Zagier (1981 Invent. Math. 64 175-98). This formula indicates that for a finite set of L-functions from a family of quadratic twists, the central values are all either zero or are greater than some positive cutoff. The excised model imposes this same condition on the central values of characteristic polynomials of matrices from {SO}(2N). Strangely, the cutoff on the characteristic polynomials that results in a convincing model for the L-function zeros is significantly smaller than that which we would obtain by naively transferring Kohnen and Zagier’s cutoff to the {SO}(2N) ensemble. In this current paper we investigate a modification to the excised model. It lacks the simplicity of the original excised ensemble, but it serves to explain the reason for the unexpectedly low cutoff in the original excised model. Additionally, the distribution of central L-values is ‘choppier’ than the distribution of characteristic polynomials, in the sense that it is a superposition of a series of peaks: the characteristic polynomial distribution is a smooth approximation to this. The excised model did not attempt to incorporate these successive peaks, only the initial cutoff. Here we experiment with including some of the structure of the L-value distribution. The conclusion is that a critical feature of a good model is to associate the correct mass to the first peak of the L-value distribution.
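A toy numerical version of the excision idea can be sketched by sampling matrices from SO(2N), evaluating the characteristic polynomial at 1 (the random-matrix analogue of the central L-value), and discarding values below a cutoff. The matrix size, sample count, and cutoff here are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.stats import special_ortho_group

rng = np.random.default_rng(0)
N = 6  # matrices drawn from SO(2N)

# |characteristic polynomial at 1| = |det(I - M)| for each sampled matrix
vals = np.array([
    abs(np.linalg.det(np.eye(2 * N) - special_ortho_group.rvs(2 * N, random_state=rng)))
    for _ in range(300)
])

cutoff = 0.05  # illustrative cutoff; the paper derives its cutoff differently
excised = vals[vals >= cutoff]
```

The excised sample mimics the observed property that central values are either zero or bounded away from zero.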
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander
2016-09-01
The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. The emission under consideration is understood to be induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using the method of statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying the correct model describing the statistical distribution of the data, because in these methods the measurement data are not used directly but rather through their statistical distributions, e.g., in the method based on hazard analysis, or in the one that uses maximum value statistics.
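The fit-and-test procedure described above can be sketched with SciPy: fit a two-parameter Weibull to inter-event times and test the fitted CDF with Kolmogorov-Smirnov. The data here are synthetic stand-ins for the filtered microseismic record; the shape and scale values are assumptions for the example.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for filtered inter-event times of microseismic emission
rng = np.random.default_rng(42)
inter_event = stats.weibull_min.rvs(c=1.3, scale=2.0, size=500, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(inter_event, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted CDF
ks = stats.kstest(inter_event, 'weibull_min', args=(shape, loc, scale))
```

A large p-value means the null hypothesis (Weibull fits the empirical CDF) is not rejected. Note that fitting and testing on the same data makes the test optimistic; the paper's formal verification procedure should be consulted for the rigorous setup.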
About normal distribution on SO(3) group in texture analysis
NASA Astrophysics Data System (ADS)
Savyolova, T. I.; Filatov, S. V.
2017-12-01
This article studies and compares different normal distributions (NDs) on SO(3) group, which are used in texture analysis. Those NDs are: Fisher normal distribution (FND), Bunge normal distribution (BND), central normal distribution (CND) and wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on SO(3) group. CND is a subcase for normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy’s central limit theorem). WND is motivated by CLT in R^3 and mapped to SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of crystallites orientation distribution function in texture analysis.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm, σ8, and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
NASA Astrophysics Data System (ADS)
Khajehei, S.; Madadgar, S.; Moradkhani, H.
2014-12-01
The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters, and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using statistical methods based on conditional probability density functions (PDFs). However, one of the requirements of current methods is to assume a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of precipitation forecasts needed to drive a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions constructed from univariate marginal distributions, capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers in reproducing observed climatology, during a ten-year verification period (2000-2010).
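The copula step above can be sketched with a Gaussian copula (a simple stand-in for whatever copula family the authors used): each margin is mapped to normal scores via its empirical CDF, the dependence is captured by one correlation parameter, and the conditional law of observed precipitation given a forecast value follows from conditional normality. The gamma-distributed synthetic data are assumptions for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for forecast and observed monthly precipitation (mm)
fcst = stats.gamma.rvs(a=2.0, scale=30.0, size=2000, random_state=rng)
obs = 0.7 * fcst + stats.gamma.rvs(a=2.0, scale=10.0, size=2000, random_state=rng)

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

# Dependence parameter of the Gaussian copula
rho = np.corrcoef(normal_scores(fcst), normal_scores(obs))[0, 1]

def conditional_median(f0):
    """Median of the conditional distribution of obs given forecast f0."""
    p = (np.searchsorted(np.sort(fcst), f0) + 0.5) / (len(fcst) + 1)
    z = stats.norm.ppf(p)
    # In normal-score space: z_obs | z_fcst = z ~ N(rho * z, 1 - rho^2),
    # whose median is rho * z; map back through the empirical obs margin
    return np.quantile(obs, stats.norm.cdf(rho * z))
```

This avoids the Gaussian-marginal assumption criticized in the abstract: the margins stay empirical, and only the dependence structure is Gaussian.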
A computer model of molecular arrangement in a n-paraffinic liquid
NASA Astrophysics Data System (ADS)
Vacatello, Michele; Avitabile, Gustavo; Corradini, Paolo; Tuzi, Angela
1980-07-01
A computer model of a bulk liquid polymer was built to investigate the problem of local order. The model is made of C30 n-alkane molecules; it is not a lattice model, but it allows for a continuous variability of torsion angles and interchain distances, subject to realistic intra- and intermolecular potentials. Experimental x-ray scattering curves and radial distribution functions are well reproduced. Calculated properties like end-to-end distances, distribution of torsion angles, radial distribution functions, and chain direction correlation parameters, all indicate a random coil conformation and no tendency to form bundles of parallel chains.
The beta Burr type X distribution properties with application.
Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran
2016-01-01
We develop a new continuous distribution called the beta-Burr type X distribution that extends the Burr type X distribution, and provide a comprehensive mathematical treatment of its properties. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function, and rth moment of the order statistics. We consider maximum likelihood estimation of the parameters. Additionally, the asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. An illustration with a real dataset indicates that this new distribution can serve as a good alternative for modeling positive real data in many areas.
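A minimal sketch of the construction, assuming the standard beta-generated family and the common two-parameter form of the Burr type X (generalized Rayleigh) CDF; the parameter names are illustrative, and the paper should be consulted for its exact parameterization.

```python
import numpy as np
from scipy.special import betainc

def burr_x_cdf(x, theta, lam):
    """Burr type X (generalized Rayleigh) CDF, F(x) = (1 - exp(-(lam*x)^2))^theta, x > 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, (1.0 - np.exp(-(lam * x) ** 2)) ** theta, 0.0)

def beta_burr_x_cdf(x, a, b, theta, lam):
    """Beta-generated family: G(x) = I_{F(x)}(a, b), the regularized
    incomplete beta function evaluated at the Burr X CDF."""
    return betainc(a, b, burr_x_cdf(x, theta, lam))
```

Setting a = b = 1 recovers the parent Burr type X distribution, since I_u(1, 1) = u.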
Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...
2013-01-01
A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.
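The comparison metric used above (mean-absolute deviation between normalized RTD curves) is easy to reproduce. The sketch below uses ordinary gamma PDFs as stand-ins for the normalized RTD curves; the shape values are assumptions, not the paper's four-parameter fit.

```python
import numpy as np
from scipy import stats

# Dimensionless time axis (t / mean residence time)
t = np.linspace(0.01, 4.0, 400)

# Hypothetical normalized RTD curves: a gamma-based model and a "tracer" curve
gamma_rtd = stats.gamma.pdf(t, a=3.0, scale=1.0 / 3.0)  # mean residence time = 1
tracer = stats.gamma.pdf(t, a=3.5, scale=1.0 / 3.5)     # stand-in for measured RTD

def mad(model, observed):
    """Mean-absolute deviation between two normalized RTD curves."""
    return np.mean(np.abs(model - observed))

score = mad(gamma_rtd, tracer)
```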
Kaon quark distribution functions in the chiral constituent quark model
NASA Astrophysics Data System (ADS)
Watanabe, Akira; Sawada, Takahiro; Kao, Chung Wen
2018-04-01
We investigate the valence u and s̄ quark distribution functions of the K+ meson, v_K^u(x, Q^2) and v_K^s̄(x, Q^2), in the framework of the chiral constituent quark model. We judiciously choose the bare distributions at the initial scale to generate the dressed distributions at the higher scale, considering the meson cloud effects and the QCD evolution, which agree with the phenomenologically satisfactory valence quark distribution of the pion and the experimental data of the ratio v_K^u(x, Q^2)/v_π^u(x, Q^2). We show in detail how the meson cloud effects affect the bare distribution functions. We find that a smaller SU(3) flavor symmetry breaking effect is observed, compared with the results of preceding studies based on other approaches.
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepak, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
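One of the functions commonly used for aerosol size distributions is the lognormal; a minimal sketch of obtaining best-fit parameter estimates from a sample follows. The radii, true parameters, and units are assumptions for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic aerosol radii (micrometres) drawn from a lognormal size distribution
radii = stats.lognorm.rvs(s=0.6, scale=0.2, size=2000, random_state=rng)

# Best-fit estimates of the lognormal parameters (location fixed at zero):
# s_hat is the log-space standard deviation, scale_hat = exp(log-space mean)
s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii, floc=0)
```

The same pattern (choose a candidate analytic form, fit, compare) applies to the other families in the catalog, and different forms can fit the same data with comparable accuracy, as the abstract notes.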
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1979-01-01
The evolution of the two-point correlation function for the large-scale distribution of galaxies in an expanding universe is studied on the assumption that the perturbation densities lie in a Gaussian distribution centered on any given mass scale. The perturbations are evolved according to the Friedmann equation, and the correlation function for the resulting distribution of perturbations at the present epoch is calculated. It is found that: (1) the computed correlation function gives a satisfactory fit to the observed function in cosmological models with a density parameter (Omega) of approximately unity, provided that a certain free parameter is suitably adjusted; (2) the power-law slope in the nonlinear regime reflects the initial fluctuation spectrum, provided that the density profile of individual perturbations declines more rapidly than the -2.4 power of distance; and (3) both positive and negative contributions to the correlation function are predicted for cosmological models with Omega less than unity.
Differential memory in the earth's magnetotail
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Chen, J.
1991-01-01
The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.
Hoppe, Fred M
2008-06-01
We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.
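The Ewens sampling distribution mentioned above has a well-known closed form, which can serve as a concrete reference point. The sketch below evaluates the probability of an allelic configuration a = (a_1, ..., a_n), where a_j counts the allele types represented j times.

```python
from math import factorial, prod

def ewens_prob(a, theta):
    """Ewens sampling formula:
    P(a) = n! / theta^(n) * prod_j (theta/j)^{a_j} / a_j!,
    where theta^(n) is the rising factorial theta(theta+1)...(theta+n-1)."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))
    p = factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        p *= (theta / j) ** aj / factorial(aj)
    return p
```

For n = 3 the three configurations (three singletons; one singleton plus one doubleton; one tripleton) exhaust the sample space, so their probabilities sum to one.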
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago
2014-07-15
Matlab Toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving GridPV Toolbox information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulations functions are included to show potential uses of the toolbox functions.
Confronting species distribution model predictions with species functional traits.
Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M
2016-02-01
Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day^-1. Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.
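The correlation-with-confidence-interval analysis above can be sketched with a Pearson correlation and a Fisher-z interval (a standard construction; the paper does not state which interval method it used). The synthetic suitability and growth data are assumptions for the example.

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson correlation with a Fisher-z (arctanh) confidence interval."""
    x, y = np.asarray(x), np.asarray(y)
    r, _ = stats.pearsonr(x, y)
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(len(x) - 3)
    zc = stats.norm.ppf(1.0 - alpha / 2.0)
    return r, (np.tanh(z - zc * se), np.tanh(z + zc * se))

# Synthetic example: habitat suitability vs growth rate for 17 hypothetical sites
rng = np.random.default_rng(3)
suit = rng.uniform(0.2, 0.9, size=17)
growth = 10.0 * suit + rng.normal(0.0, 1.0, size=17)
r, (lo, hi) = pearson_with_ci(suit, growth)
```

A CI excluding zero (as for the wild populations in the abstract) indicates a detectable suitability-trait relationship at the chosen level.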
NASA Astrophysics Data System (ADS)
Fisher, Karl B.
1995-08-01
The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of O(Ω^0.6/b) while variations in the anisotropic velocity dispersion yield terms of order O(Ω^1.2/b^2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
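The trait-based classification idea can be illustrated with a minimal Gaussian-mixture-style classifier: one Gaussian component per vegetation class fitted on (LMA, Nmass, LAI) trait vectors, with new sites assigned to the highest-density component. The two classes, their trait means, and covariances are invented for the sketch and are not the paper's fitted GMM.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic (LMA, Nmass, LAI) trait vectors for two hypothetical vegetation classes
forest = rng.multivariate_normal([120.0, 1.8, 4.5], np.diag([100.0, 0.04, 0.25]), 200)
grass = rng.multivariate_normal([60.0, 2.6, 1.5], np.diag([64.0, 0.09, 0.16]), 200)

def fit_gaussian(X):
    """One Gaussian component per vegetation class."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_density(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + d.size * np.log(2 * np.pi))

components = {"forest": fit_gaussian(forest), "grass": fit_gaussian(grass)}

def classify(trait_vector):
    """Assign the class whose Gaussian gives the highest density (equal priors)."""
    x = np.asarray(trait_vector, dtype=float)
    return max(components, key=lambda k: log_density(x, *components[k]))
```

Unlike a fixed PFT lookup, the class assignment here responds continuously to trait values, which is the property the trait-based approach exploits.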
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn was widely used in geometrical optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and ray scalar theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random surface microfacet theory is presented by combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized, and unpolarized geometrical attenuation functions are given as separate expressions and are validated with experimental data from two samples. It is shown that the modified polarized geometrical attenuation function achieves better physical plausibility, improves the precision of the BRDF model, and widens its applicability to different polarization states.
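For reference, Blinn's original (unpolarized) attenuation factor, which the paper modifies, has the standard V-groove form G = min(1, 2(N·H)(N·V)/(V·H), 2(N·H)(N·L)/(V·H)). A minimal sketch, assuming unit input vectors:

```python
import numpy as np

def blinn_attenuation(n, l, v):
    """Blinn's geometrical attenuation (masking/shadowing) factor.
    n: surface normal, l: light direction, v: view direction (all unit vectors)."""
    n, l, v = (np.asarray(u, dtype=float) for u in (n, l, v))
    h = l + v
    h /= np.linalg.norm(h)  # half vector
    nh, nv, nl, vh = n @ h, n @ v, n @ l, v @ h
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
```

At normal incidence G = 1 (no masking or shadowing); near grazing view angles G drops well below 1, which is where the V-groove assumption and the scalar-ray treatment introduce the inaccuracies the paper targets.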
NASA Technical Reports Server (NTRS)
Considine, David B.; Douglass, Anne R.
1994-01-01
A parameterization of NAT (nitric acid trihydrate) clouds is developed for use in 2D models of the stratosphere. The parameterization uses model distributions of HNO3 and H2O to determine critical temperatures for NAT formation as a function of latitude and pressure. National Meteorological Center temperature fields are then used to determine monthly temperature frequency distributions, also as a function of latitude and pressure. The fractions of these distributions which fall below the critical temperatures for NAT formation are then used to determine the NAT cloud surface area density for each location in the model grid. By specifying heterogeneous reaction rates as functions of the surface area density, it is then possible to assess the effects of the NAT clouds on model constituent distributions. We also consider the increase in the NAT cloud formation in the presence of a fleet of stratospheric aircraft. The stratospheric aircraft NO(x) and H2O perturbations result in increased HNO3 as well as H2O. This increases the probability of NAT formation substantially, especially if it is assumed that the aircraft perturbations are confined to a corridor region.
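The core of the parameterization (the fraction of the temperature frequency distribution below the NAT critical temperature, scaled into a surface area density) can be sketched as follows. A Gaussian temperature frequency distribution and all numerical values are assumptions for the example; the paper builds its frequency distributions from NMC temperature fields.

```python
from scipy import stats

def nat_fraction(t_mean, t_sigma, t_crit):
    """Fraction of a (Gaussian) monthly temperature frequency distribution
    that falls below the critical temperature for NAT formation."""
    return stats.norm.cdf(t_crit, loc=t_mean, scale=t_sigma)

# Hypothetical grid point: mean 197 K, spread 4 K, NAT threshold 195 K
frac = nat_fraction(197.0, 4.0, 195.0)

# NAT cloud surface area density scaled by the sub-threshold fraction
A0 = 1.0e-8  # cm^2/cm^3, hypothetical maximum surface area density
sad = frac * A0
```

Raising HNO3 and H2O (as in the stratospheric aircraft scenario) raises the critical temperature, which directly increases this fraction.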
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρ_AGB(τ) = (C/τ_0) exp(−τ/τ_0), in an effective range of the neutron exposure values. However, the specific expressions of the proportionality factor C and the mean neutron exposure τ_0 in the exponential distribution function for different models are not completely determined in the related literature. Through dissecting the basic method to obtain the exponential DNE, and systematically analyzing the solution procedures of neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ_0 are derived. Given the discrete neutron exposure distribution P_k, the relationships of C and τ_0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.
High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006
2006-08-01
Snow models use the gamma distribution with m = 0. The most widely used analytical parameterization for the raindrop size distribution is that of Uijlenhoet and Stricker, obtained as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution.
Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.
2017-08-08
Here, we compare the reduced non-local electron transport model to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high-density region into a lower-density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between the models in the coronal region.
Dependence of Microlensing on Source Size and Lens Mass
NASA Astrophysics Data System (ADS)
Congdon, A. B.; Keeton, C. R.
2007-11-01
In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns prior to satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows for a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone does not reduce the spatial errors in AET; it improves only the streamflow simulations. We further examine model calibration using only spatial objective functions, which measure the association between observed and simulated AET maps, as well as a case combining spatial and streamflow metrics.
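The Latin hypercube sampling step used for the sensitivity analysis can be sketched with SciPy's quasi-Monte Carlo module: draw stratified points in the unit cube, then rescale to parameter bounds. The three parameters, their bounds, and the sample count below are placeholders, not mHM's actual parameter ranges.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of three hypothetical model parameters within bounds
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=20)          # stratified points in [0, 1)^3
lower = np.array([0.1, 1.0, 0.5])
upper = np.array([0.9, 5.0, 2.0])
samples = qmc.scale(unit, lower, upper)
```

The LHS property guarantees exactly one sample in each of the 20 equal-width strata along every parameter axis, which gives better marginal coverage than plain random sampling for the same budget.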
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
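The U-shaped occupation-time statistics mentioned above are easy to see numerically in the simplest ergodic setting: for a symmetric random walk, the fraction of time spent in x > 0 follows (approximately) the arcsine law, piling up near 0 and 1 rather than near 1/2. This sketch illustrates that signature; it is the classical Lévy arcsine case, not the nonergodic trap-model dynamics analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_walks, n_steps = 2000, 1000

# Simple symmetric random walks; occupation-time fraction spent in x > 0
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
paths = steps.cumsum(axis=1)
occupation = (paths > 0).mean(axis=1)

# Arcsine-law signature: mass piles up near 0 and 1, not near 1/2
tails = np.mean((occupation < 0.1) | (occupation > 0.9))
middle = np.mean((occupation > 0.4) & (occupation < 0.6))
```

For the arcsine density the tail mass (below 0.1 plus above 0.9) is about 0.41 versus roughly 0.13 between 0.4 and 0.6, so the empirical histogram is strongly U-shaped.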
NASA Astrophysics Data System (ADS)
Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping
2012-09-01
Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated with field testing data from a wheel loader, and the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for describing the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method better reflect the actual load characteristics of the gear component.
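A minimal sketch of the idea: fit a two-component Gaussian mixture to bimodal "load" data by EM, then judge the fit against the histogram with the coefficient of determination, one of the criteria named above. The data, component count, and EM routine are illustrative assumptions, not the paper's actual selection procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic bimodal "load" data with two distribution centers
data = np.concatenate([rng.normal(-3.0, 1.0, 1000), rng.normal(4.0, 1.5, 1000)])

def em_gmm(x, k=2, iters=200):
    """Crude EM for a 1-D Gaussian mixture (illustrative only)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: reweight, recenter, rescale
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

w, mu, sig = em_gmm(data)

# coefficient of determination between the histogram and the mixture density
hist, edges = np.histogram(data, bins=40, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
model = sum(w[j] * np.exp(-0.5 * ((centers - mu[j]) / sig[j]) ** 2)
            / (sig[j] * np.sqrt(2 * np.pi)) for j in range(2))
r2 = 1.0 - np.sum((hist - model) ** 2) / np.sum((hist - hist.mean()) ** 2)
```

A single-peak fit on the same data would score a far lower r2, which is exactly the criterion the paper uses to prefer the mixture.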
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies that involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
Alves, Daniele S. M.; El Hedri, Sonia; Wacker, Jay G.
2016-03-21
We discuss the relevance of directional detection experiments in the post-discovery era and propose a method to extract the local dark matter phase space distribution from directional data. The first feature of this method is a parameterization of the dark matter distribution function in terms of integrals of motion, which can be analytically extended to infer properties of the global distribution if certain equilibrium conditions hold. The second feature of our method is a decomposition of the distribution function in moments of a model independent basis, with minimal reliance on the ansatz for its functional form. We illustrate our method using the Via Lactea II N-body simulation as well as an analytical model for the dark matter halo. Furthermore, we conclude that O(1000) events are necessary to measure deviations from the Standard Halo Model and constrain or measure the presence of anisotropies.
Thermomechanical Fractional Model of TEMHD Rotational Flow
Hamza, F.; Abd El-Latief, A.; Khatan, W.
2017-01-01
In this work, the fractional mathematical model of an unsteady rotational flow of Xanthan gum (XG) between two cylinders in the presence of a transverse magnetic field has been studied. This model contains two fractional parameters, α and β, representing thermomechanical effects. The Laplace transform is used to obtain the numerical solutions. The influence of the fractional parameters on the field distributions (temperature, velocity, stress, and electric current) is discussed graphically. The relationship between the rotation of both cylinders, the fractional parameters, and the field distributions is also discussed for small and large values of time. PMID:28045941
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.
Here, we compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region.
NASA Astrophysics Data System (ADS)
Andreev, Pavel A.
2017-02-01
The dielectric permeability tensor for spin polarized plasmas is derived in terms of the spin-1/2 quantum kinetic model in six-dimensional phase space. Expressions for the distribution function and spin distribution function are derived in linear approximations on the path of dielectric permeability tensor derivation. The dielectric permeability tensor is derived for the spin-polarized degenerate electron gas. It is also discussed at the finite temperature regime, where the equilibrium distribution function is presented by the spin-polarized Fermi-Dirac distribution. Consideration of the spin-polarized equilibrium states opens possibilities for the kinetic modeling of the thermal spin current contribution in the plasma dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le, Hai P.; Cambier, Jean -Luc
Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, ionization, and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.
Johnson, Timothy R; Kuhn, Kristine M
2015-12-01
This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
Working Memory and Decision-Making in a Frontoparietal Circuit Model.
Murray, John D; Jaramillo, Jorge; Wang, Xiao-Jing
2017-12-13
Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. 
These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks. Copyright © 2017 the authors 0270-6474/17/3712167-20$15.00/0.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data.
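The core idea of comparing distributions through characteristic functions rather than densities can be sketched in a univariate toy version (the paper's bivariate sub-Gaussian machinery is more involved; the distance measure and samples below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def ecf(sample, t):
    """Empirical characteristic function evaluated at the points t."""
    return np.exp(1j * np.outer(t, sample)).mean(axis=1)

t = np.linspace(-3.0, 3.0, 61)
gauss_cf = np.exp(-0.5 * t**2)          # characteristic function of N(0, 1)

normal_sample = rng.standard_normal(4000)
cauchy_sample = rng.standard_cauchy(4000)   # heavy-tailed, density-free tests still apply

# sup-distance of each sample's ECF from the Gaussian CF
d_normal = np.max(np.abs(ecf(normal_sample, t) - gauss_cf))
d_cauchy = np.max(np.abs(ecf(cauchy_sample, t) - gauss_cf))
```

The key point is that the empirical characteristic function always exists, even for stable laws whose density has no closed form, so the comparison never requires evaluating a density.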
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the parameter estimates of β and Σ are obtained from the expected values of the marginal posterior distribution functions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, computing these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
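A sketch of the Gibbs sampling step under the Jeffreys prior, reduced to the single-response case for brevity (the multivariate case described above replaces the scaled inverse chi-square draw for the error variance with an inverse Wishart draw for Σ); the data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 0.5, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y            # least-squares center of the conditional

draws = []
sigma2 = 1.0
for _ in range(2000):
    # beta | sigma2, y  ~  N(beta_hat, sigma2 * (X'X)^-1)
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # sigma2 | beta, y  ~  RSS / chi2_n  (Jeffreys prior on sigma2)
    resid = y - X @ beta
    sigma2 = (resid @ resid) / rng.chisquare(n)
    draws.append(beta)

# posterior mean after burn-in approximates E[beta | y]
post_mean = np.mean(draws[500:], axis=0)
```

The posterior mean of the draws recovers the regression coefficients, which is exactly the "expected value of the marginal posterior" estimator the abstract describes.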
NASA Astrophysics Data System (ADS)
Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the corresponding SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is violated, however, when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times.
We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
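The "random age sampling implies an exponential TTD" result for a well-mixed store can be checked with a toy reservoir in which every particle is equally likely to leave at each step; the particle counts and leaving probability are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(4)
n_particles, p_leave, n_steps = 20000, 0.05, 2000

age = np.zeros(n_particles)
for _ in range(n_steps):
    age += 1
    # random age sampling: each particle leaves with the same probability,
    # independent of its age, and is replaced by age-zero water
    age[rng.random(n_particles) < p_leave] = 0.0

mean_age = age.mean()            # ~ (1 - p) / p = 19 for a geometric age law
frac_old = (age > 40).mean()     # ~ 0.95**40 ~ exp(-2), the exponential tail
```

The resulting age distribution is geometric (the discrete exponential), matching the exponential TTD that the abstract attributes to uniformly random age sampling in the saturated zone.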
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
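The equivalence in step (1) is a monotone relation: with iid Gaussian residuals and the variance set to its maximum-likelihood value, the log-likelihood is a decreasing function of the sum of squared errors, so ranking candidate simulations by NSE and by likelihood gives the same order. A toy check, with invented observation and simulation vectors and the Box-Cox transform from step (2) included for completeness:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency coefficient."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(obs, sim):
    """iid Gaussian log-likelihood with the residual variance at its MLE."""
    n = len(obs)
    s2 = np.mean((obs - sim) ** 2)
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

def boxcox(q, lam):
    """Box-Cox transform used to stabilize residual variance."""
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

obs = np.array([1.0, 3.0, 2.5, 4.0, 2.0])
sim_a = obs + np.array([0.1, -0.2, 0.1, 0.0, -0.1])   # small residuals
sim_b = obs + np.array([0.5, -0.8, 0.6, -0.4, 0.7])   # larger residuals
```

Whichever simulation has the higher NSE necessarily has the higher Gaussian log-likelihood, since both depend on the data only through the sum of squared residuals.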
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
Momentum distributions for H 2 ( e , e ' p )
Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.
2014-12-29
[Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models dependent on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound state wave functions, form factors, and final state interactions. We present a method to extract the momentum distributions from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] In order to test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which has theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at least this process can provide an upper bound on the deuteron momentum distribution. Due to the reliance on the theoretical calculation to obtain this quantity, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
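The branch-and-bound machinery aside, the robustness advantage of L1 over L2 fitting can be seen with a plain grid search over a single Beckmann roughness parameter when one measurement is an outlier. This one-parameter setup and the synthetic data are our simplification, not the paper's method:

```python
import numpy as np

def beckmann(theta, m):
    """Beckmann microfacet normal distribution with roughness m."""
    t = np.tan(theta)
    return np.exp(-(t / m) ** 2) / (np.pi * m ** 2 * np.cos(theta) ** 4)

theta = np.linspace(0.05, 1.2, 24)
m_true = 0.3
measured = beckmann(theta, m_true)
measured[5] *= 50.0                      # a single corrupted measurement

grid = np.linspace(0.1, 0.6, 501)
l1 = [np.sum(np.abs(beckmann(theta, m) - measured)) for m in grid]
l2 = [np.sum((beckmann(theta, m) - measured) ** 2) for m in grid]
m_l1 = grid[np.argmin(l1)]
m_l2 = grid[np.argmin(l2)]
```

The L1 estimate stays at the true roughness because the outlier contributes only a bounded slope to the objective, while the squared outlier residual drags the L2 estimate away, which mirrors the paper's finding on the MERL materials.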
A descriptive model of resting-state networks using Markov chains.
Xie, H; Pal, R; Mitra, S
2016-08-01
Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of resting-state networks (RSNs) based on steady-state distributions and provides a novel angle for investigating the RSFC of multiple functional nodes. The paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the generated steady-state distributions of the default mode network have higher consistency across subjects than random nodes drawn from various RSNs.
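The two ingredients here, the steady-state distribution of an inferred Markov chain and the Hellinger distance between two such distributions, can be sketched directly; the transition matrices below are invented for illustration:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

P1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])    # a "sticky" two-state chain
P2 = np.array([[0.5, 0.5],
               [0.5, 0.5]])    # a memoryless chain
pi1, pi2 = steady_state(P1), steady_state(P2)
d = hellinger(pi1, pi2)
```

Two subjects whose inferred chains yield similar stationary distributions would score a small `d`, which is the consistency measure the abstract applies across RSN nodes.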
How Bright is the Proton? A Precise Determination of the Photon Parton Distribution Function.
Manohar, Aneesh; Nason, Paolo; Salam, Gavin P; Zanderighi, Giulia
2016-12-09
It has become apparent in recent years that it is important, notably for a range of physics studies at the Large Hadron Collider, to have accurate knowledge on the distribution of photons in the proton. We show how the photon parton distribution function (PDF) can be determined in a model-independent manner, using electron-proton (ep) scattering data, in effect viewing the ep→e+X process as an electron scattering off the photon field of the proton. To this end, we consider an imaginary, beyond the Standard Model process with a flavor changing photon-lepton vertex. We write its cross section in two ways: one in terms of proton structure functions, the other in terms of a photon distribution. Requiring their equivalence yields the photon distribution as an integral over proton structure functions. As a result of the good precision of ep data, we constrain the photon PDF at the level of 1%-2% over a wide range of momentum fractions.
Modeling of particle radiative properties in coal combustion depending on burnout
NASA Astrophysics Data System (ADS)
Gronarz, Tim; Habermehl, Martin; Kneer, Reinhold
2017-04-01
In the present study, absorption and scattering efficiencies as well as the scattering phase function of a cloud of coal particles are described as a function of the particle combustion progress. Mie theory for coated particles is applied as the mathematical model. The scattering and absorption properties are determined by several parameters: the size distribution, the spectral distribution of incident radiation, and the spectral index of refraction of the particles. A study to determine the influence of each parameter is performed, finding that the largest effect is due to the refractive index, followed by the effect of the size distribution; the influence of the incident radiation profile is negligible. As part of this study, the possibility of applying a constant index of refraction is investigated. Finally, scattering and absorption efficiencies as well as the phase function are presented as a function of burnout with the presented model, and the results are discussed.
Timing in a Variable Interval Procedure: Evidence for a Memory Singularity
Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn
2013-01-01
Rats were trained in either a 30-s peak-interval procedure, or a 15-45-s variable interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30 s, with the uniform group having an earlier and broader peak response function and the ramping group having a later peak function compared to the single-duration group. The changes in these mean functions, as well as the statistics from single-trial analyses, are better captured by a model of timing in which memory is represented by a single, average delay to reinforcement than by one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783
A comparison of non-local electron transport models relevant to inertial confinement fusion
NASA Astrophysics Data System (ADS)
Sherlock, Mark; Brodrick, Jonathan; Ridgers, Christopher
2017-10-01
We compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a 1-dimensional hohlraum ablation problem. We find the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
General relativistic magnetohydrodynamical κ-jet models for Sagittarius A*
NASA Astrophysics Data System (ADS)
Davelaar, J.; Mościbrodzka, M.; Bronzwaer, T.; Falcke, H.
2018-04-01
Context. The observed spectral energy distribution of an accreting supermassive black hole typically forms a power-law spectrum in the near infrared (NIR) and optical wavelengths, that may be interpreted as a signature of accelerated electrons along the jet. However, the details of acceleration remain uncertain. Aim. In this paper, we study the radiative properties of jets produced in axisymmetric general relativistic magnetohydrodynamics (GRMHD) simulations of hot accretion flows onto underluminous supermassive black holes both numerically and semi-analytically, with the aim of investigating the differences between models with and without accelerated electrons inside the jet. Methods: We assume that electrons are accelerated in the jet regions of our GRMHD simulation. To model them, we modify the electrons' distribution function in the jet regions from a purely relativistic thermal distribution to a combination of a relativistic thermal distribution and the κ-distribution function (the κ-distribution function is itself a combination of a relativistic thermal and a non-thermal power-law distribution, and thus it describes accelerated electrons). Inside the disk, we assume a thermal distribution for the electrons. In order to resolve the particle acceleration regions in the GRMHD simulations, we use a coordinate grid that is optimized for modeling jets. We calculate jet spectra and synchrotron maps by using the ray tracing code RAPTOR, and compare the synthetic observations to observations of Sgr A*. Finally, we compare numerical models of jets to semi-analytical ones. Results: We find that in the κ-jet models, the radio-emitting region size, radio flux, and spectral index in NIR/optical bands increase for decreasing values of the κ parameter, which corresponds to a larger amount of accelerated electrons. This is in agreement with analytical predictions. 
In our models, the size of the emission region depends roughly linearly on the observed wavelength λ, independently of the assumed distribution function. The model with κ = 3.5, ηacc = 5-10% (the percentage of electrons that are accelerated), and observing angle i = 30° fits the observed Sgr A* emission in the flaring state from the radio to the NIR/optical regimes, while the model with κ = 3.5, ηacc < 1%, and observing angle i = 30° fits the upper limits in quiescence. At this point, our models (including the purely thermal ones) cannot reproduce the observed source sizes accurately, which is probably due to the assumption of axisymmetry in our GRMHD simulations. The κ-jet models naturally recover the observed nearly flat radio spectrum of Sgr A* without invoking the somewhat artificial isothermal jet model that was suggested earlier. Conclusions: From our model fits we conclude that between 5% and 10% of the electrons inside the jet of Sgr A* are accelerated into a κ-distribution function when Sgr A* is flaring. In quiescence, we match the NIR upper limits when this percentage is <1%.
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is instead to categorize BRDF models based on functional form: microfacet normal distribution, geometric attenuation, directional-volumetric and Fresnel terms, and cross-section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for the creation of unique microfacet models driven by experimentally measured BRDFs.
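The standardized decomposition described here can be sketched as a product of a microfacet normal distribution D, a geometric attenuation term G, a Fresnel term F, and the 1/(4 cosθi cosθo) cross-section conversion factor. A minimal sketch, with one illustrative choice for each component (GGX distribution, Smith masking, Schlick Fresnel); the roughness alpha and normal-incidence reflectance f0 are assumed parameters, not values from the paper:

```python
import math

def ggx_D(cos_h, alpha):
    """GGX/Trowbridge-Reitz microfacet normal distribution (illustrative choice)."""
    a2 = alpha * alpha
    denom = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def smith_G1(cos_v, alpha):
    """Smith masking term for GGX (separable form, for brevity)."""
    a2 = alpha * alpha
    return 2.0 * cos_v / (cos_v + math.sqrt(a2 + (1.0 - a2) * cos_v * cos_v))

def schlick_F(cos_d, f0):
    """Schlick approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_d) ** 5

def microfacet_brdf(cos_i, cos_o, cos_h, cos_d, alpha=0.3, f0=0.04):
    """f_r = D * G * F / (4 cos_i cos_o); the last factor is the
    cross-section conversion term in the standardized notation."""
    D = ggx_D(cos_h, alpha)
    G = smith_G1(cos_i, alpha) * smith_G1(cos_o, alpha)
    F = schlick_F(cos_d, f0)
    return D * G * F / (4.0 * cos_i * cos_o)
```

Swapping in a different D, G, or F from such a component library yields a different named microfacet model while the overall functional form stays fixed.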
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration; application of the models therefore requires knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
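The step/rest random-walk model described above can be sketched directly by alternating gamma-distributed rest periods and step lengths. The shape/scale parameters below are hypothetical placeholders; in the study they are functions of bed elevation:

```python
import random

random.seed(1)

# Hypothetical parameters; the study expresses these as functions of bed elevation.
STEP_SHAPE, STEP_SCALE = 2.0, 0.15   # step length ~ Gamma(shape, scale), metres
REST_SHAPE, REST_SCALE = 1.5, 40.0   # rest period ~ Gamma(shape, scale), seconds

def particle_travel(total_time):
    """Alternate random rests and steps until total_time is exhausted;
    return the distance travelled by a single bed-material particle."""
    t, x = 0.0, 0.0
    while True:
        rest = random.gammavariate(REST_SHAPE, REST_SCALE)
        if t + rest > total_time:
            return x
        t += rest
        x += random.gammavariate(STEP_SHAPE, STEP_SCALE)
```

Averaging `particle_travel` over many realizations gives a Monte Carlo estimate of mean transport distance for the assumed parameters.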
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to a random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, the positions of nuclei are correlated. In this article an analytical approach is presented for computing the pair-correlation function of nuclei in progressive nucleation. This quantity is further employed to characterize the spatial distribution of nuclei through the nearest-neighbor distribution function. The modeling is developed for nucleation in 2D space with a power growth law, and it is applied to describe electrochemical nucleation, where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model, which gives insight into the transition from Poissonian to correlated nearest-neighbor probability density.
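The exclusion of nucleation from already-transformed regions can be illustrated with a toy 2D simulation: candidate nuclei appear uniformly in time, but only survive if they fall outside every growing disk. The rates, linear growth law, and unit domain are illustrative assumptions, not the article's model:

```python
import math, random

random.seed(2)

def progressive_nucleation(rate=30, growth=0.02, steps=50, L=1.0):
    """Toy simulation: per time step, `rate` candidate nuclei appear uniformly
    in an L x L domain; a candidate is rejected if it lies inside any disk
    grown (linearly in time) around an earlier nucleus.  Returns positions."""
    nuclei = []  # (x, y, birth_step)
    for step in range(steps):
        for _ in range(rate):
            x, y = random.random() * L, random.random() * L
            covered = any(
                math.hypot(x - nx, y - ny) < growth * (step - nb)
                for nx, ny, nb in nuclei
            )
            if not covered:
                nuclei.append((x, y, step))
    return [(x, y) for x, y, _ in nuclei]

def nearest_neighbour_distances(points):
    """Input for an empirical nearest-neighbor distribution function."""
    return [
        min(math.hypot(x - u, y - v) for u, v in points if (u, v) != (x, y))
        for x, y in points
    ]
```

Comparing the histogram of these distances with the Poisson expectation exposes the correlation induced by the exclusion effect.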
On the Mass Distribution of Animal Species
NASA Astrophysics Data System (ADS)
Redner, Sidney; Clauset, Aaron; Schwab, David
2009-03-01
We develop a simple diffusion-reaction model to account for the broad and asymmetric distribution of adult body masses for species within related taxonomic groups. The model assumes three basic evolutionary features that control body mass: (i) a fixed lower limit that is set by metabolic constraints, (ii) a species extinction risk that is a weakly increasing function of body mass, and (iii) cladogenetic diffusion, in which daughter species have a slight tendency toward larger mass. The steady-state solution for the distribution of species masses in this model can be expressed in terms of the Airy function. This solution gives mass distributions that are in good agreement with data on 4002 terrestrial mammal species from the late Quaternary and 8617 extant bird species.
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
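The shrinkage step can be sketched with a single-level Haar transform and a soft threshold. This is a deliberately minimal stand-in: the study's scheme is frequency-adaptive and operates on atlas-tessellated regions with heavy-tail coefficient models:

```python
import math

def haar_forward(signal):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero, suppressing Gaussian-like
    noise while keeping large (heavy-tailed) signal coefficients."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t):
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_threshold(detail, t))
```

With threshold t = 0 the transform round-trips exactly; increasing t removes progressively more fine-scale detail.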
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, we study how the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter influences the results, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions have a significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, we attempt to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, the exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of the coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis leads to the conclusion that a model based on the Poisson random process is applicable to the statistical description of the variations {δfoF2} and to probabilistic estimates of their range during heliogeophysical disturbances.
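The "invariants up to the fourth order" are the sample mean, variance, skewness, and excess kurtosis; for a Gaussian sample the last two are near zero, so large values flag the non-Gaussian behavior described above. A minimal sketch, with a synthetic Gaussian sample standing in for {δfoF2}:

```python
import math, random

def invariants(sample):
    """Mean, variance, skewness and excess kurtosis of a sample.
    Nonzero skewness/kurtosis indicate departure from the normal law."""
    n = float(len(sample))
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in sample) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in sample) / (n * var ** 2) - 3.0
    return mean, var, skew, kurt

random.seed(3)
# Synthetic stand-in for a quiet-time sample; disturbed-period samples
# would show skewness and excess kurtosis well away from zero.
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
```

Applying `invariants` to disturbed-period samples and comparing against these near-zero baselines reproduces the kind of test used in the paper.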
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We then utilize time series of δ18O, δ2H and 3H along with CFC measurements from individual fractures in the Bedrichov Tunnel of the Czech Republic to investigate the TTD, and the uncertainty in estimated mean travel time, in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ2 residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimation. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ2 value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential model and the dispersion model with our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD.
These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to the structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
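A lumped-parameter fit of this kind evaluates the outlet concentration as a convolution of the recharge-concentration history with the assumed TTD. A minimal sketch using the exponential TTD; the discretization and mean transit time are illustrative assumptions:

```python
import math

def exponential_ttd(tau, mean):
    """Exponential transit time distribution g(tau) = exp(-tau/T)/T."""
    return math.exp(-tau / mean) / mean

def modeled_concentration(c_in, mean, dt=1.0):
    """Discrete convolution of an input-concentration history (oldest first)
    with the TTD: C_out = sum over tau of g(tau) * C_in(t - tau) * dt."""
    n = len(c_in)
    total = 0.0
    for i in range(n):
        tau = i * dt
        total += exponential_ttd(tau, mean) * c_in[n - 1 - i] * dt
    return total
```

Fitting proceeds by sweeping `mean` (and swapping in other TTDs, e.g. dispersion or matrix-diffusion forms) to minimize the misfit against measured tracer concentrations.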
Parton distribution functions with QED corrections in the valon model
NASA Astrophysics Data System (ADS)
Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin
2017-10-01
The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD⊗QED DGLAP evolution equations in the framework of the "valon" model at the next-to-leading-order QCD and leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the newly released CT14QED global fit code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] over a wide range of x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. Furthermore, we propose a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Electron distribution functions in electric field environments
NASA Technical Reports Server (NTRS)
Rudolph, Terence H.
1991-01-01
The amount of current carried by an electric discharge in its early stages of growth is strongly dependent on its geometrical shape. Discharges with a large number of branches, each funnelling current to a common stem, tend to carry more current than those with fewer branches. The fractal character of typical discharges was simulated using stochastic models based on solutions of the Laplace equation. Extension of these models requires the use of electron distribution functions to describe the behavior of electrons in the undisturbed medium ahead of the discharge. These electrons, interacting with the electric field, determine the propagation of branches in the discharge and the way in which further branching occurs. The first phase in the extension of the referenced models, the calculation of simple electron distribution functions in an air/electric field medium, is discussed. Two techniques are investigated: (1) the solution of the Boltzmann equation in homogeneous, steady-state environments, and (2) the use of Monte Carlo simulations. Distribution functions calculated from both techniques are illustrated. Advantages and disadvantages of each technique are discussed.
Global Land Carbon Uptake from Trait Distributions
NASA Astrophysics Data System (ADS)
Butler, E. E.; Datta, A.; Flores-Moreno, H.; Fazayeli, F.; Chen, M.; Wythers, K. R.; Banerjee, A.; Atkin, O. K.; Kattge, J.; Reich, P. B.
2016-12-01
Historically, functional diversity in land surface models has been represented through a range of plant functional types (PFTs), each of which has a single value for all of its functional traits. Here we expand the diversity of the land surface by using a distribution of trait values for each PFT. The data for these trait distributions are from a subset of the global database of plant traits, TRY, and this analysis uses three leaf traits: mass-based nitrogen and phosphorus content and specific leaf area, which influence both photosynthesis and respiration. The data are extrapolated into continuous surfaces through two methodologies. The first, a categorical method, classifies the species observed in TRY into satellite estimates of their plant functional type abundances, analogous to how traits are currently assigned to PFTs in land surface models. The second, a Bayesian spatial method, additionally estimates how the distribution of a trait changes in accord with both climate and soil covariates. These two methods produce distinct patterns of diversity, which are incorporated into a land surface model to estimate how the range of trait values affects the global land carbon budget.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
Conservative algorithms for non-Maxwellian plasma kinetics
Le, Hai P.; Cambier, Jean -Luc
2017-12-08
Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.
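The single-step conservation property can be illustrated with a deliberately simplified exchange (not the paper's scheme): electrons in two energy bins collisionally excite a two-level atom, and each excitation moves one electron down in energy by exactly the level gap, so particle numbers and total energy are conserved to round-off:

```python
# Toy two-bin electron / two-level atom model; bin energies are assumed values.
E_HIGH, E_LOW = 12.0, 2.0          # electron energy bins (eV)
GAP = E_HIGH - E_LOW               # atomic excitation energy (eV)

def step(n_e, n_atom, rate_coeff, dt):
    """One explicit step: n_e = [low, high] electron bin populations,
    n_atom = [ground, excited] atomic populations.  Each excitation moves
    one electron from the high bin to the low bin and excites one atom."""
    n_exc = min(rate_coeff * n_e[1] * n_atom[0] * dt, n_e[1], n_atom[0])
    n_e = [n_e[0] + n_exc, n_e[1] - n_exc]
    n_atom = [n_atom[0] - n_exc, n_atom[1] + n_exc]
    return n_e, n_atom

def total_energy(n_e, n_atom):
    """Electron kinetic energy plus stored atomic excitation energy."""
    return n_e[0] * E_LOW + n_e[1] * E_HIGH + n_atom[1] * GAP
```

Because the electron energy gap equals the atomic level gap by construction, conservation holds exactly per step; the paper's contribution is achieving this on arbitrary non-uniform energy grids.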
Shizgal, Bernie D
2018-05-01
This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady-state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant-temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant-temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method, and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct, physically based dynamical approach, in contrast to the nonextensive entropy formalism of Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
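In its isotropic-speed form, the Kappa distribution generalizes the Maxwellian with a power-law tail and recovers it as κ → ∞. A minimal sketch; the thermal speed θ and the κ values used below are illustrative, not the paper's parameters:

```python
import math

def kappa_pdf(v, kappa, theta):
    """Isotropic Kappa speed distribution:
    f(v) ∝ v^2 [1 + v^2/(kappa θ^2)]^(-(kappa+1)), normalised so ∫ f dv = 1."""
    norm = (4.0 / math.sqrt(math.pi)
            * math.exp(math.lgamma(kappa + 1.0) - math.lgamma(kappa - 0.5))
            / (kappa ** 1.5 * theta ** 3))
    return norm * v * v * (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwell_pdf(v, theta):
    """Maxwellian speed distribution: the kappa -> infinity limit."""
    return 4.0 / math.sqrt(math.pi) / theta ** 3 * v * v * math.exp(-((v / theta) ** 2))
```

Small κ enhances the high-speed tail relative to the Maxwellian, which is the nonequilibrium feature the satellite fits capture.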
Modeling species-abundance relationships in multi-species collections
Peng, S.; Yin, Z.; Ren, H.; Guo, Q.
2003-01-01
The species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe the structure of communities, ecologists have developed many other models to fit species-abundance data. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., the negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, and Poisson-lognormal distribution; (2) niche models, i.e., the geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, and Zipf or Zipf-Mandelbrot model; and (3) dynamic models describing community dynamics and the restrictive function of the environment on the community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, the log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
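Two of the most widely used niche models are easy to state explicitly. A sketch of the expected rank abundances under Motomura's geometric series (with an assumed pre-emption fraction k) and MacArthur's broken-stick model:

```python
def geometric_series(n_species, k, total=1.0):
    """Motomura's geometric series: each species in rank order pre-empts
    fraction k of the remaining resource; the last takes the remainder."""
    left, out = total, []
    for _ in range(n_species - 1):
        out.append(left * k)
        left *= 1.0 - k
    out.append(left)  # so the abundances sum exactly to `total`
    return out

def broken_stick(n_species, total=1.0):
    """Broken-stick expected abundances for ranks i = 1..S:
    E[a_i] = (total / S) * sum_{j=i}^{S} 1/j."""
    s = n_species
    return [total / s * sum(1.0 / j for j in range(i, s + 1)) for i in range(1, s + 1)]
```

Fitting a model then amounts to comparing observed rank-abundance curves against these expected sequences.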
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
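The Rayleigh-distributed wind speed holds exactly in the zero-mean, equal-variance, uncorrelated special case of the bivariate normal. A quick empirical sketch of that special case; σ = 5 m/s is an assumed value:

```python
import math, random

random.seed(4)

def wind_sample(sigma, n):
    """Zero-mean, equal-variance, uncorrelated components: the special case
    in which the speed is exactly Rayleigh distributed."""
    return [(random.gauss(0.0, sigma), random.gauss(0.0, sigma)) for _ in range(n)]

def rayleigh_cdf(s, sigma):
    """Rayleigh cumulative distribution of speed with scale sigma."""
    return 1.0 - math.exp(-s * s / (2.0 * sigma * sigma))

# Empirical check: about half the sampled speeds fall below the Rayleigh median.
samples = wind_sample(5.0, 20000)
speeds = [math.hypot(u, v) for u, v in samples]
median = 5.0 * math.sqrt(2.0 * math.log(2.0))
frac = sum(1 for s in speeds if s <= median) / len(speeds)
```

With nonzero means or unequal variances, the speed follows a Rice-type distribution instead, which is why the full five-parameter bivariate normal is carried through the derivations.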
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm, and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
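The Rosin-Rammler distribution named above has a simple closed-form CDF, F(D) = 1 - exp(-(D/D̄)^n); a minimal sketch with assumed parameter values (and assumed micrometre units):

```python
import numpy as np

def rosin_rammler_cdf(d, d_bar, n):
    """Cumulative Rosin-Rammler (Weibull-type) size distribution,
    F(D) = 1 - exp(-(D / d_bar)**n); parameters here are illustrative."""
    return 1.0 - np.exp(-(np.asarray(d) / d_bar) ** n)

# Fraction of particles below selected diameters (micrometres, assumed units).
d = np.array([1.0, 5.0, 10.0, 20.0])
print(rosin_rammler_cdf(d, d_bar=10.0, n=2.0))
```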
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.
2005-01-01
Cloud microphysics is inevitably affected by the size distribution of smoke particles (CCN, cloud condensation nuclei) below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel, and frozen drops/hail]. Each type is described by a separate size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions.
Species, functional groups, and thresholds in ecological resilience
Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris
2012-01-01
The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. 
On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data in the Bohai Sea are sampled for the case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained from the conditional and joint probability models are much smaller than those from univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
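The copula construction can be sketched for the bivariate Clayton case by conditional inversion. The dependence parameter and the marginal distributions below are assumptions chosen for illustration, not the fitted Bohai Sea models (which use Pearson type III marginals).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta = 2.0    # illustrative Clayton dependence parameter
n = 20_000

# Sample the bivariate Clayton copula by conditional inversion:
# v = ((t**(-theta/(1+theta)) - 1) * u**(-theta) + 1)**(-1/theta).
u1 = rng.uniform(size=n)
t = rng.uniform(size=n)
u2 = (u1 ** (-theta) * (t ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Map uniforms through illustrative marginals (hypothetical, not fitted ones).
wave_height = stats.weibull_min(c=1.8, scale=2.5).ppf(u1)   # m
wind_speed = stats.lognorm(s=0.4, scale=8.0).ppf(u2)        # m/s

# Rank correlation survives the marginal transforms; for Clayton,
# Kendall's tau = theta / (theta + 2) = 0.5 here.
tau, _ = stats.kendalltau(wave_height, wind_speed)
print(f"empirical Kendall tau ~ {tau:.2f}")
```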
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained; their one-dimensional marginals are widely used in probability theory. The generating functions of these multidimensional distributions were also derived.
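The role of generating functions for independent discrete variables can be illustrated directly: the generating function of a sum of independent variables is the product of their generating functions, which at the coefficient level is a convolution of the probability vectors. A small sketch with made-up distributions:

```python
import numpy as np

# Probability-generating-function coefficients: index k holds P(X = k).
# Two independent, purely illustrative discrete variables:
px = np.array([0.2, 0.5, 0.3])   # X supported on {0, 1, 2}
py = np.array([0.6, 0.4])        # Y supported on {0, 1}

# The PGF of X + Y is the product of the two PGFs, i.e. the
# convolution of the probability vectors.
pz = np.convolve(px, py)
print(pz)        # distribution of X + Y over {0, 1, 2, 3}
print(pz.sum())  # still a probability distribution
```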
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
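The key computational step, evaluating a high-dimensional integral as a multivariate normal CDF, can be sketched as follows. The dimension and the equicorrelated covariance are illustrative; for this particular case the orthant probability has the known closed form 1/(d+1).

```python
import numpy as np
from scipy.stats import multivariate_normal

# Evaluate a 10-dimensional normal CDF directly, standing in for the
# integrations folded into the multivariate normal CDF in the paper.
dim = 10
cov = 0.5 * np.ones((dim, dim)) + 0.5 * np.eye(dim)  # equicorrelated, rho = 0.5
p = multivariate_normal(mean=np.zeros(dim), cov=cov).cdf(np.zeros(dim))

# For rho = 0.5 the orthant probability is exactly 1/(dim + 1) = 1/11.
print(f"P(all {dim} components <= 0) = {p:.4f}")
```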
Derivation of low flow frequency distributions under human activities and its implications
NASA Astrophysics Data System (ADS)
Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua
2017-06-01
Low flow, the minimum streamflow in dry seasons, is crucial to water supply, agricultural irrigation, and navigation. Human activities, such as groundwater pumping, influence low flow severely. In order to derive low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type III distribution, (2) the probability distribution of climatic dry spell periods is derived based on a base flow recession model, (3) the base flow recession model is updated under human activities, and (4) the low flow distribution under human activities is obtained from the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used to describe the base flow recession. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve for the linear reservoir model, while altering the form of the frequency distribution function for the nonlinear one. This indicates that merely adjusting the parameters of the low flow frequency distribution is not always adequate for tackling the changing environment.
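A minimal sketch of the linear reservoir recession with a constant net withdrawal (pumping minus return flow): solving dS/dt = -Q - net with Q = S/k gives Q(t) = (q0 + net)·exp(-t/k) - net, so the recession is shifted downward, consistent with the location-parameter change reported for the linear model. All numerical values are illustrative, not the Wudinghe ones.

```python
import numpy as np

def linear_reservoir_flow(q0, k, t, pumping=0.0, return_flow=0.0):
    """Base flow recession for a linear reservoir Q = S/k with
    dS/dt = -Q - (pumping - return_flow); the ODE solution is
    Q(t) = (q0 + net) * exp(-t / k) - net.  Values are illustrative."""
    net = pumping - return_flow
    return (q0 + net) * np.exp(-t / k) - net

t = np.arange(0, 61, 15.0)                     # days into the dry spell
natural = linear_reservoir_flow(20.0, 30.0, t)
pumped = linear_reservoir_flow(20.0, 30.0, t, pumping=2.0)
print(np.round(natural, 2))  # undisturbed recession
print(np.round(pumped, 2))   # shifted downward by the net withdrawal
```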
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, for analyzing retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
A lower bound on the Milky Way mass from general phase-space distribution function models
NASA Astrophysics Data System (ADS)
Bratek, Łukasz; Sikora, Szymon; Jałocha, Joanna; Kutschera, Marek
2014-02-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ≈150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass, and it is small. In particular, a mass value of 2.4 × 10^11 M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/562/A134
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; we have now found this phenomenon in model systems and obtained a theoretical understanding of it. Our method can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
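For the capital-dependent game, the exact distribution can also be obtained by propagating the probability vector of the capital round by round. The sketch below assumes the canonical parameter choices (epsilon = 0.005, game B branching on capital mod 3); the mixed game 'M' (A or B chosen by a fair coin) is handled exactly by averaging the win probabilities.

```python
import numpy as np

EPS = 0.005
P_A = 0.5 - EPS                       # game A: a slightly losing coin
P_B0, P_B1 = 0.1 - EPS, 0.75 - EPS    # game B branches on capital mod 3

def win_prob(game, capital):
    p_b = P_B0 if capital % 3 == 0 else P_B1
    if game == 'A':
        return P_A
    if game == 'B':
        return p_b
    return 0.5 * (P_A + p_b)          # 'M': A or B chosen by a fair coin

def exact_distribution(game, rounds):
    """Propagate the exact probability distribution of the capital,
    starting from capital 0 (index `rounds` in the array)."""
    dist = np.zeros(2 * rounds + 1)
    dist[rounds] = 1.0
    for _ in range(rounds):
        new = np.zeros_like(dist)
        for i, mass in enumerate(dist):
            if mass > 0.0:
                p = win_prob(game, i - rounds)
                new[i + 1] += mass * p       # win: capital + 1
                new[i - 1] += mass * (1 - p)  # lose: capital - 1
        dist = new
    return dist

capital = np.arange(-100, 101)
for game in ('A', 'B', 'M'):
    d = exact_distribution(game, 100)
    print(f"game {game}: mean capital after 100 rounds = {capital @ d:+.3f}")
```

Note that the support of the resulting distribution alternates between even and odd capitals with the number of rounds, matching the two limiting distributions mentioned above.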
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal portion of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained over the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data gives a robust confirmation of the simple statistical model presented here.
Prokhorov, Alexander
2012-05-01
This paper proposes a three-component bidirectional reflectance distribution function (3C BRDF) model consisting of diffuse, quasi-specular, and glossy components for calculation of effective emissivities of blackbody cavities and then investigates the properties of the new reflection model. The particle swarm optimization method is applied for fitting a 3C BRDF model to measured BRDFs. The model is incorporated into the Monte Carlo ray-tracing algorithm for isothermal cavities. Finally, the paper compares the results obtained using the 3C model and the conventional specular-diffuse model of reflection.
The perturbed Sparre Andersen model with a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Heli; Yin, Chuancun
2008-10-01
In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
Continuum-kinetic approach to sheath simulations
NASA Astrophysics Data System (ADS)
Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana
2016-10-01
Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.
Probability and Statistics in Sensor Performance Modeling
2010-12-01
The software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE). The report covers important numerical issues in the implementation, statistical analysis for measuring sensor performance, and the use of the cumulative distribution function (cdf) and complementary cumulative distribution function within a decision-support tool (DST).
Interpretation of environmental tracers in groundwater systems with stagnant water zones.
Maloszewski, Piotr; Stichler, Willibald; Zuber, Andrzej
2004-03-01
Lumped-parameter models are commonly applied for determining the age of water from time records of transient environmental tracers. The simplest models (e.g. piston flow or exponential) are also applicable for dating based on the decay or accumulation of tracers in groundwater systems. The models are based on the assumption that the transit time distribution function (exit age distribution function) of the tracer particles in the investigated system adequately represents the distribution of flow lines and is described by a simple function. A chosen or fitted function (called the response function) describes the transit time distribution of a tracer which would be observed at the output (discharge area, spring, stream, or pumping wells) in the case of an instantaneous injection at the entrance (recharge area). Due to large space and time scales, response functions are not measurable in groundwater systems, therefore, functions known from other fields of science, mainly from chemical engineering, are usually used. The type of response function and the values of its parameters define the lumped-parameter model of a system. The main parameter is the mean transit time of tracer through the system, which under favourable conditions may represent the mean age of mobile water. The parameters of the model are found by fitting calculated concentrations to the experimental records of concentrations measured at the outlet. The mean transit time of tracer (often called the tracer age), whether equal to the mean age of water or not, serves in adequate combinations with other data for determining other useful parameters, e.g. the recharge rate or the content of water in the system. The transit time distribution and its mean value serve for confirmation or determination of the conceptual model of the system and/or estimation of its potential vulnerability to anthropogenic pollution. 
In the interpretation of environmental tracer data with the aid of the lumped-parameter models, the influence of diffusion exchange between mobile water and stagnant or quasi-stagnant water is seldom considered, though it leads to large differences between tracer and water ages. Therefore, the article is focused on the transit time distribution functions of the most common lumped-parameter models, particularly those applicable for the interpretation of environmental tracer data in double-porosity aquifers, or aquifers in which aquitard diffusion may play an important role. A case study is recalled for a confined aquifer in which the diffusion exchange with aquitard most probably strongly influenced the transport of environmental tracers. Another case study presented is related to the interpretation of environmental tracer data obtained from lysimeters installed in the unsaturated zone with a fraction of stagnant water.
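A lumped-parameter calculation of the type described, the exponential model g(τ) = exp(-τ/T)/T convolved with a decaying tracer input, can be sketched as follows. The mean transit time and the idealized input record are assumptions for illustration; tritium's half-life (12.32 years) is used for the decay term.

```python
import numpy as np

T = 20.0                    # assumed mean transit time, years
lam = np.log(2) / 12.32     # tritium decay constant, 1/years
dt = 0.1
tau = np.arange(0, 200, dt)
g = np.exp(-tau / T) / T    # exponential-model transit time distribution

years = np.arange(0, 60, dt)
c_in = np.where(years < 5, 100.0, 10.0)   # idealized input pulse (TU)

# Output concentration: convolution of the input record with the
# response function attenuated by radioactive decay,
# c_out(t) = integral of c_in(t - tau) g(tau) exp(-lam tau) dtau.
c_out = np.convolve(c_in, g * np.exp(-lam * tau))[: len(years)] * dt
print(f"peak output {c_out.max():.1f} TU at year {years[np.argmax(c_out)]:.1f}")
```

The output peak is strongly damped and lags the end of the input pulse, illustrating why the fitted mean transit time, rather than the raw concentration record, carries the age information.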
2014-09-01
the MLI coating; similarly, the surface model as represented by the bidirectional reflectance distribution function (BRDF) will never be identical to that found on actual space objects. A description of the BRDF model and how it compares to the Ashikhmin-Shirley BRDF [14] using similar nomenclature can be found in Ref. [15].
Matthews, A P; Garenne, M L
2013-09-01
The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
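Iterative Proportional Fitting itself can be sketched on a toy age-preference matrix: scale rows to the male marginal totals and columns to the female totals until both match. All numbers below are illustrative, not the Zambian data.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100):
    """Iterative Proportional Fitting: scale a seed matrix (here an
    illustrative age-preference weight matrix) until its row and
    column sums match the one-sex marginal totals of new marriages."""
    m = seed.astype(float).copy()
    for _ in range(iters):
        m *= (row_targets / m.sum(axis=1))[:, None]  # match male marginals
        m *= col_targets / m.sum(axis=0)             # match female marginals
    return m

# Toy 3x3 age-preference seed and marginal totals (purely illustrative;
# the totals are consistent, 100 marriages on each side):
seed = np.array([[4.0, 2.0, 1.0],
                 [2.0, 4.0, 2.0],
                 [1.0, 2.0, 4.0]])
men = np.array([50.0, 30.0, 20.0])    # new marriages by male age group
women = np.array([40.0, 40.0, 20.0])  # new marriages by female age group

marriages = ipf(seed, men, women)
print(np.round(marriages, 1))
print(marriages.sum(axis=1), marriages.sum(axis=0))  # match the targets
```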
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
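The exponential-dwell-time special case can be simulated exactly with the Gillespie algorithm; every rate constant below is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state ("telegraph") gene model with exponential dwell times.
K_ON, K_OFF = 0.2, 0.2   # gene activation / inactivation rates (1/time)
K_TX, GAMMA = 10.0, 1.0  # transcription and per-molecule degradation rates

def sample_mrna(t_end=200.0):
    """Return the mRNA copy number at time t_end (Gillespie simulation),
    starting from an inactive gene with no mRNA."""
    t, active, m = 0.0, 0, 0
    while True:
        r_on = K_ON * (1 - active)
        r_off = K_OFF * active
        r_tx = K_TX * active
        r_deg = GAMMA * m
        total = r_on + r_off + r_tx + r_deg
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return m
        u = rng.uniform(0.0, total)
        if u < r_on:
            active = 1
        elif u < r_on + r_off:
            active = 0
        elif u < r_on + r_off + r_tx:
            m += 1
        else:
            m -= 1

samples = np.array([sample_mrna() for _ in range(300)])
# Steady-state mean is K_TX/GAMMA * K_ON/(K_ON + K_OFF) = 5 for these rates.
print(f"empirical mean mRNA copy number ~ {samples.mean():.2f}")
```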
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
Statistical thermodynamics of a two-dimensional relativistic gas.
Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood
2009-03-01
In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system for studying relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as the moving frame (Γ'). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).
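As a numerical illustration of the distribution named above (my own sketch, not the paper's molecular-dynamics code; units with m = c = k_B = 1 are an assumption), the two-dimensional Jüttner speed density is proportional to v γ(v)^4 exp(-β γ(v)) with γ(v) = 1/sqrt(1 - v²); for large β (low temperature) it approaches the two-dimensional Maxwell-Boltzmann speed distribution, whose mean speed is sqrt(π/(2β)):

```python
import math

def juttner_mean_speed_2d(beta, n=200000):
    """Mean speed of the 2D Juttner distribution,
    f(v) ~ v * gamma(v)**4 * exp(-beta * gamma(v)) on v in (0, 1),
    normalized and averaged by simple Riemann summation."""
    h = 1.0 / n
    norm, mean = 0.0, 0.0
    for i in range(1, n):
        v = i * h
        g = 1.0 / math.sqrt(1.0 - v * v)
        f = v * g ** 4 * math.exp(-beta * g)   # underflows harmlessly near v = 1
        norm += f * h
        mean += v * f * h
    return mean / norm

beta = 30.0                                    # mildly relativistic: k_B T = m c^2 / 30
v_juttner = juttner_mean_speed_2d(beta)
v_maxwell = math.sqrt(math.pi / (2.0 * beta))  # 2D Maxwell-Boltzmann mean speed
```

For this β the two mean speeds agree to within a few percent; lowering β makes the relativistic corrections, and hence the gap, grow.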
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time demanding; for example, when heavy-tailed distributions are used, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function to obtain the posterior distribution and some of its posterior quantities in an explicit, computable form. Two examples are provided to illustrate the theory.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Foudriat, E. C.
1991-01-01
A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effects of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is quite small, its effects on performance could be significant.
Dynamics of a stochastic cell-to-cell HIV-1 model with distributed delay
NASA Astrophysics Data System (ADS)
Ji, Chunyan; Liu, Qun; Jiang, Daqing
2018-02-01
In this paper, we consider a stochastic cell-to-cell HIV-1 model with distributed delay. Firstly, we show that there is a global positive solution of this model before exploring its long-time behavior. Then sufficient conditions for extinction of the disease are established. Moreover, we obtain sufficient conditions for the existence of an ergodic stationary distribution of the model by constructing a suitable stochastic Lyapunov function. The stationary distribution implies that the disease is persistent in the mean. Finally, we provide some numerical examples to illustrate theoretical results.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
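The canonical-form distribution described above can be sketched concretely (an illustrative reimplementation under my own assumptions, not the authors' code): given error-function values E_m for a set of candidate models, the maximum-entropy weights are p_m proportional to exp(-β E_m), with the sensitivity factor β fixed by the constraint that the expected error equal a specified value.

```python
import math

def gibbs_weights(errors, beta):
    """Maximum-entropy (canonical) weights p_m ~ exp(-beta * E_m)."""
    e0 = min(errors)                       # shift for numerical stability
    w = [math.exp(-beta * (e - e0)) for e in errors]
    z = sum(w)
    return [wi / z for wi in w]

def solve_beta(errors, target, hi=1e4, iters=200):
    """Bisection for beta such that sum_m p_m * E_m = target.
    The expected error decreases monotonically in beta, from the plain
    mean (beta = 0) toward min(errors) as beta grows."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        p = gibbs_weights(errors, mid)
        if sum(pi * e for pi, e in zip(p, errors)) > target:
            lo = mid                       # expected error too high: sharpen
        else:
            hi = mid
    return 0.5 * (lo + hi)

errors = [0.8, 1.1, 1.5, 2.4, 3.0]         # hypothetical error-function values
beta = solve_beta(errors, target=1.2)      # target must lie in (min, mean)
p = gibbs_weights(errors, beta)
```

Marginal distributions for an individual parameter then follow, as in the abstract, by summing the weights over all candidate models that share that parameter value.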
NASA Technical Reports Server (NTRS)
Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.
1980-01-01
Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and that in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.
The class of L ∩ D and its application to renewal reward process
NASA Astrophysics Data System (ADS)
Kamışlık, Aslı Bektaş; Kesemen, Tülay; Khaniyev, Tahir
2018-01-01
The class of L ∩ D is generated by the intersection of two important subclasses of heavy-tailed distributions: the long-tailed distributions and the dominated varying distributions. This class is itself an important member of the heavy-tailed distributions and has principal application areas in renewal, renewal reward and random walk processes. The aim of this study is to review some well-known and lesser-known results on renewal functions generated by the class of L ∩ D and to apply them to a special renewal reward process, known in the literature as a semi-Markovian inventory model of type (s, S). In particular, we focus on the Pareto distribution, which belongs to the L ∩ D subclass of heavy-tailed distributions. As a first step, we obtain asymptotic results for the renewal function generated by a Pareto distribution from the class of L ∩ D, using well-known results of Embrechts and Omey [1]. We then apply the results obtained for the Pareto distribution to renewal reward processes. As an application, we investigate the inventory model of type (s, S) when demands have a Pareto distribution from the class of L ∩ D. We obtain an asymptotic expansion for the ergodic distribution function and, finally, an asymptotic expansion for the nth-order moments of the distribution of this process.
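To make the renewal asymptotics concrete: the elementary renewal theorem gives H(t) ≈ t/μ for large t whenever the inter-arrival mean μ is finite, which holds for Pareto(α, x_m) with α > 1, where μ = α x_m/(α - 1). The sketch below is my own illustration with arbitrary parameter values, not the paper's derivation; it checks the asymptote by Monte Carlo.

```python
import random

def pareto_draw(rng, alpha, xm):
    """Inverse-CDF sample from Pareto(alpha, xm); 1 - U avoids U = 0."""
    return xm * (1.0 - rng.random()) ** (-1.0 / alpha)

def renewal_function(t, alpha, xm, n_paths=20000, seed=7):
    """Monte Carlo estimate of H(t) = E[N(t)] for Pareto inter-arrivals."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        s, n = 0.0, 0
        while True:
            s += pareto_draw(rng, alpha, xm)
            if s > t:
                break
            n += 1                      # count renewals occurring by time t
        total += n
    return total / n_paths

alpha, xm = 2.5, 1.0                    # heavy-tailed, but finite mean and variance
mu = alpha * xm / (alpha - 1.0)         # = 5/3
h_est = renewal_function(50.0, alpha, xm)
h_asym = 50.0 / mu                      # elementary renewal asymptote = 30
```

For 1 < α < 2 the variance is infinite and the second-order correction terms behave differently, which is where the refined expansions of the kind discussed in the abstract become essential.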
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew
Der, Ricky; Plotkin, Joshua B.
2014-01-01
We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.
Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin
2017-11-01
The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly follows a single step with mechanism function f(α) = (1 - α)³ and the kinetic parameter pair E₀ = 180.5 kJ/mol and A₀ = 1.5 × 10¹³ s⁻¹. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although its reaction order n = 3.3 was in accordance with the iso-conversional analysis, the Logistic DAEM could not resolve the detailed weight loss features, since it represents devolatilization as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight fraction distribution in the discrete DAEM did reflect the weight loss features, so these two models could describe them. Copyright © 2017 Elsevier Ltd. All rights reserved.
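Using the single-step parameters reported above (E₀ = 180.5 kJ/mol, A₀ = 1.5 × 10¹³ s⁻¹, third-order mechanism), the rate law can be integrated under a linear heating programme. This is my own illustration with an assumed heating rate of 10 K/min, not the paper's computation; it advances dα/dT = (A/β_h) exp(-E/RT) (1 - α)³ with a simple Euler step.

```python
import math

R = 8.314                                   # gas constant, J/(mol K)

def conversion_profile(E=180.5e3, A=1.5e13, n=3.0, heat_rate=10.0 / 60.0,
                       T0=400.0, T1=800.0, dT=0.02):
    """Euler integration of dalpha/dT = (A/heat_rate)*exp(-E/(R*T))*(1-alpha)**n
    under linear heating (heat_rate in K/s, temperatures in K)."""
    alpha, T = 0.0, T0
    profile = {}
    while T < T1:
        k = A * math.exp(-E / (R * T))      # Arrhenius rate constant, 1/s
        alpha = min(1.0, alpha + dT * (k / heat_rate) * (1.0 - alpha) ** n)
        T += dT
        profile[round(T)] = alpha           # record value reached near each kelvin
    return profile

profile = conversion_profile()
```

With these parameters the conversion is negligible at 500 K, rises steeply through roughly 600-700 K, and is nearly complete by 800 K, the characteristic single-step DTG shape the abstract refers to.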
A New Lifetime Distribution with Bathtub and Unimodal Hazard Function
NASA Astrophysics Data System (ADS)
Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.
2008-11-01
In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing and decreasing hazard functions. Some particular cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered for estimating the three parameters present in the model. The methodology is illustrated with a real data set on industrial devices under a life test.
Quang V. Cao; Shanna M. McCarty
2006-01-01
Diameter distributions in a forest stand have been successfully characterized by use of the Weibull function. Of special interest are cases where parameters of a Weibull distribution that models a future stand are predicted, either directly or indirectly, from current stand density and dominant height. This study evaluated four methods of predicting the Weibull...
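One standard way to recover the two Weibull parameters from stand data is the method of moments (a generic illustration with synthetic diameters, not necessarily one of the four prediction methods evaluated in the study): solve the coefficient-of-variation identity CV² = Γ(1 + 2/k)/Γ(1 + 1/k)² - 1 for the shape k, then recover the scale from the mean.

```python
import math
import random

def fit_weibull_moments(diams):
    """Method-of-moments fit of a 2-parameter Weibull to diameters.
    Solves CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 for shape k by bisection
    (the right-hand side decreases monotonically in k)."""
    n = len(diams)
    mean = sum(diams) / n
    var = sum((d - mean) ** 2 for d in diams) / n
    cv2 = var / mean ** 2
    lo, hi = 0.1, 50.0
    for _ in range(200):
        k = 0.5 * (lo + hi)
        g1, g2 = math.gamma(1 + 1 / k), math.gamma(1 + 2 / k)
        if g2 / g1 ** 2 - 1 > cv2:      # model CV too large: need larger k
            lo = k
        else:
            hi = k
    k = 0.5 * (lo + hi)
    scale = mean / math.gamma(1 + 1 / k)
    return k, scale

# Synthetic stand: 20000 diameters from Weibull(scale = 20 cm, shape = 2.2).
rng = random.Random(3)
diams = [rng.weibullvariate(20.0, 2.2) for _ in range(20000)]
k_hat, scale_hat = fit_weibull_moments(diams)
```

In the "parameter prediction" setting the abstract describes, the same recovery step would be driven by moments predicted from stand density and dominant height rather than computed from observed diameters.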
The role of root distribution in eco-hydrological modeling in semi-arid regions
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bras, R. L.
2010-12-01
In semi-arid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Niche separation, through rooting strategies, is one manner in which different species coexist. At present, land surface models prescribe rooting profiles as a function of only the plant functional type of interest, with no consideration for the soil texture or rainfall regime of the region being modeled. These models do not incorporate the ability of vegetation to dynamically alter its rooting strategies in response to transient changes in environmental forcings and therefore tend to underestimate the resilience of many of these ecosystems. A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two vertical root distribution schemes: (i) static, a temporally invariant root distribution; and (ii) dynamic, a temporally variable allocation of assimilated carbon at any depth within the root zone in order to minimize the soil moisture-induced stress on the vegetation. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semi-arid Walnut Gulch Experimental Watershed in Arizona. For the static root distribution scheme, a series of simulations were carried out varying the shape of the rooting profile. The optimal distribution for the simulation was defined as the root distribution with the maximum mean transpiration over a 200 year period. This optimal distribution was determined for 5 soil textures and 2 plant functional types, and the results varied from case to case.
The dynamic rooting simulations allow vegetation the freedom to adjust the allocation of assimilated carbon to different rooting depths in response to changes in stress caused by the redistribution and uptake of soil moisture. The results obtained from these experiments elucidate the strong link between plant functional type, soil texture and climate and highlight the potential errors in the modeling of hydrologic fluxes from imposing a static root profile.
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
NASA Technical Reports Server (NTRS)
Lie-Svendsen, O.; Leer, E.
1995-01-01
We have studied the evolution of the velocity distribution function of a test population of electrons in the solar corona and inner solar wind region, using a recently developed kinetic model. The model solves the time dependent, linear transport equation, with a Fokker-Planck collision operator to describe Coulomb collisions between the 'test population' and a thermal background of charged particles, using a finite differencing scheme. The model provides information on how non-Maxwellian features develop in the distribution function in the transition region from collision dominated to collisionless flow. By taking moments of the distribution the evolution of higher order moments, such as the heat flow, can be studied.
NASA Astrophysics Data System (ADS)
Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris
2010-05-01
Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals as well as regulating microclimate and local hydrological processes. The internal regulation of these functions and therefore the development of healthy and fertile soils mainly depend on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface, subsurface and matrix flow as well as the transport of water and solutes into deeper soil layers. Different ecological earthworm types differ in their importance for these processes. Deep-burrowing anecic earthworm species (e.g., Lumbricus terrestris) affect the vertical flow and thus increase the risk of potential contamination of groundwater with agrochemicals. In contrast, horizontally burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question of which processes are more relevant is pivotal for soil management and risk assessment.
Thus, finding relevant environmental predictors which explain the distribution and dynamics of different ecological earthworm types can help us to understand where and when these processes are relevant in the landscape. We therefore develop species distribution models, which are a useful tool to predict spatiotemporal distributions of earthworm occurrence and abundance under changing environmental conditions. On the field scale, geostatistical distribution maps have shown that the spatial distribution of earthworms depends on soil parameters such as food supply, soil moisture and bulk density, but with different patterns for earthworm stages (adult, juvenile) and ecological types (anecic, endogeic, epigeic). On the landscape scale, earthworm distribution seems to be strongly controlled by management- and disturbance-related factors. Our study presents different modelling approaches for predicting distribution patterns of earthworms in the Weiherbach area, an agricultural site in Kraichtal (Baden-Württemberg, Germany). We carried out field studies on arable fields differing in soil management practices (conventional, conservational), soil properties (organic matter content, texture, soil moisture), and topography (slope, elevation) in order to identify predictors for earthworm occurrence, abundance and biomass. Our earthworm distribution models consider all ecological groups as well as different life stages, accounting for the fact that the activity of juveniles sometimes differs from that of adults. Within our BIOPORE project, our final goal is to couple the distribution models with population dynamic models and a preferential flow model into an integrated ecohydrological model, in order to analyse feedbacks between earthworm engineering and transport characteristics affecting the functioning of (agro-)ecosystems.
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data are one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset in filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with an in-situ observational dataset for daily maximum and minimum temperature and precipitation over the 1980-2010 period via the root mean square error (RMSE), mean absolute error (MAE) and mean bias error (MBE) for 17 stations in four climatic regions: humid and moderate, cold, dry and arid, and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between model and observed data. The measures of agreement between AgMERRA data and observed data demonstrated small errors in the model data for all stations. Except for stations located in cold regions, the model data under-predicted daily maximum temperature and precipitation, although not significantly. In addition, the probability distribution function and cumulative distribution function showed the same trend for all stations between model and observed data. Therefore, the AgMERRA dataset is highly reliable for filling gaps in historical observations in different climatic regions of Iran, and it could be applied as a basis for future climate scenarios.
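The three agreement measures named above have standard definitions; a minimal sketch (with made-up numbers, not the station data):

```python
import math

def rmse(obs, mod):
    """Root mean square error."""
    return math.sqrt(sum((m - o) ** 2 for o, m in zip(obs, mod)) / len(obs))

def mae(obs, mod):
    """Mean absolute error."""
    return sum(abs(m - o) for o, m in zip(obs, mod)) / len(obs)

def mbe(obs, mod):
    """Mean bias error; negative values indicate model under-prediction."""
    return sum(m - o for o, m in zip(obs, mod)) / len(obs)

obs = [31.2, 30.1, 28.4, 33.0]   # hypothetical daily maximum temperatures, deg C
mod = [30.8, 29.9, 28.9, 32.1]   # corresponding model values
```

Unlike RMSE and MAE, the MBE is signed, which is what lets a comparison like this one detect systematic under-prediction rather than just scatter.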
Bug Distribution and Pattern Classification.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The study examines the rule space model, a probabilistic model capable of measuring cognitive skill acquisition and of diagnosing erroneous rules of operation in a procedural domain. The model involves two important components: (1) determination of a set of bug distributions (bug density functions representing clusters around the rules); and (2)…
MODELING CHLORINE RESIDUALS IN DRINKING-WATER DISTRIBUTION SYSTEMS
A mass-transfer-based model is developed for predicting chlorine decay in drinking-water distribution networks. The model considers first-order reactions of chlorine to occur both in the bulk flow and at the pipe wall. The overall rate of the wall reaction is a function of the ...
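A common first-order formulation of this kind of model (a hedged sketch of the general approach with made-up coefficient values, not this specific model's equations) combines bulk decay with a wall reaction limited by mass transfer to the pipe wall:

```python
import math

def overall_decay_constant(k_bulk, k_wall, k_mt, r_h):
    """Overall first-order decay constant: bulk term plus a wall term limited
    jointly by the wall reaction rate k_wall and the mass transfer coefficient
    k_mt, scaled by the hydraulic radius r_h. Units are assumed consistent
    (e.g. k_bulk in 1/day, k_wall and k_mt in m/day, r_h in m)."""
    return k_bulk + (k_wall * k_mt) / (r_h * (k_wall + k_mt))

def chlorine_residual(c0, k, t):
    """Chlorine concentration after travel time t under first-order decay."""
    return c0 * math.exp(-k * t)

# Hypothetical pipe segment: the wall term dominates the bulk term here.
k = overall_decay_constant(k_bulk=0.55, k_wall=1.5, k_mt=1.0, r_h=0.075)
c = chlorine_residual(c0=1.2, k=k, t=0.25)   # residual after 6 h of travel
```

The combined wall term captures the limiting-regime behavior: when mass transfer is fast it reduces to k_wall/r_h, and when the wall reaction is fast the flux becomes transport-limited at k_mt/r_h.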
Non-Fickian dispersion of groundwater age
Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.
2014-01-01
We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651
NASA Technical Reports Server (NTRS)
Kimes, D. S.
1984-01-01
The directional-reflectance distributions of radiant flux from homogeneous vegetation canopies with greater than 90 percent ground cover are analyzed with a radiative-transfer model. The model assumes that the leaves consist of small finite planes with Lambertian properties. Four theoretical canopies with different leaf-orientation distributions were studied: erectophile, spherical, planophile, and heliotropic canopies. The directional-reflectance distributions from the model closely resemble reflectance distributions measured in the field. The physical scattering mechanisms operating in the model explain the variations observed in the reflectance distributions as a function of leaf-orientation distribution, solar zenith angle, and leaf transmittance and reflectance. The simulated reflectance distributions show unique characteristics for each canopy. The basic understanding of the physical scattering properties of the different canopy geometries gained in this study provides a basis for developing techniques to infer leaf-orientation distributions of vegetation canopies from directional remote-sensing measurements.
Modelling Root Systems Using Oriented Density Distributions
NASA Astrophysics Data System (ADS)
Dupuy, Lionel X.
2011-09-01
Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, and this limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of the oriented density distribution function. The growth of the root system is built as a hierarchical system of partial differential equations (PDEs) that incorporate single-root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDE. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of a barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow the application of root growth models at field scale.
Mao, Zhun; Saint-André, Laurent; Bourrier, Franck; Stokes, Alexia; Cordonnier, Thomas
2015-01-01
Background and Aims In mountain ecosystems, predicting root density in three dimensions (3-D) is highly challenging due to the spatial heterogeneity of forest communities. This study presents a simple and semi-mechanistic model, named ChaMRoots, that predicts root interception density (RID, number of roots m⁻²). ChaMRoots hypothesizes that RID at a given point is affected by the presence of roots from surrounding trees forming a polygon shape. Methods The model comprises three sub-models for predicting: (1) the spatial heterogeneity – RID of the finest roots in the top soil layer as a function of tree basal area at breast height, and the distance between the tree and a given point; (2) the diameter spectrum – the distribution of RID as a function of root diameter up to 50 mm thick; and (3) the vertical profile – the distribution of RID as a function of soil depth. The RID data used for fitting in the model were measured in two uneven-aged mountain forest ecosystems in the French Alps. These sites differ in tree density and species composition. Key Results In general, the validation of each sub-model indicated that all sub-models of ChaMRoots had good fits. The model achieved a highly satisfactory compromise between the number of aerial input parameters and the fit to the observed data. Conclusions The semi-mechanistic ChaMRoots model focuses on the spatial distribution of root density at the tree cluster scale, in contrast to the majority of published root models, which function at the level of the individual. Based on easy-to-measure characteristics, simple forest inventory protocols and three sub-models, it achieves a good compromise between the complexity of the case study area and that of the global model structure. ChaMRoots can be easily coupled with spatially explicit individual-based forest dynamics models and thus provides a highly transferable approach for modelling 3-D root spatial distribution in complex forest ecosystems. PMID:26173892
Stability of hot electron plasma in the ELMO bumpy torus
NASA Astrophysics Data System (ADS)
Tsang, K. T.; Cheng, C. Z.
The stability of a hot electron plasma in the ELMO Bumpy Torus was investigated using two different models. In the first model, where the hot electron distribution function is assumed to be a delta function in the perpendicular velocity, a stability boundary in addition to those discussed by Nelson and by Van Dam and Lee is found. In the second model, where the hot electron distribution function is assumed to be a Maxwellian in the perpendicular velocity, stability boundaries significantly different from those of the first model are found. Coupling of the Nelson-Van Dam-Lee mode to the compressional Alfven mode is now possible. This leads to a higher permissible core plasma beta value for stable operation.
A Hierarchy of Heuristic-Based Models of Crowd Dynamics
NASA Astrophysics Data System (ADS)
Degond, P.; Appert-Rolland, C.; Moussaïd, M.; Pettré, J.; Theraulaz, G.
2013-09-01
We derive a hierarchy of kinetic and macroscopic models from a noisy variant of the heuristic behavioral Individual-Based Model of Ngai et al. (Disaster Med. Public Health Prep. 3:191-195,
Impact of geometrical properties on permeability and fluid phase distribution in porous media
NASA Astrophysics Data System (ADS)
Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.
2008-09-01
To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800³ voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of the sand material was mapped with X-rays from synchrotrons. To analyze the effect of geometry on water flow and fluid distribution, we carried out three types of analysis: Firstly, we computed geometrical properties like chord length, distance from the solids, pore size distribution and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases.
Depending on the procedure to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.
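The first two Minkowski functionals of a segmented (binary) voxel structure can be estimated directly on the grid. A minimal sketch, using a simple face-counting surface estimate (an assumption for illustration; not the authors' code):

```python
import numpy as np

def minkowski_volume_surface(binary):
    """Estimate the first two Minkowski functionals of a 3-D binary
    structure: the solid volume fraction and an interface-area estimate
    obtained by counting faces between solid and pore voxels."""
    b = np.asarray(binary, dtype=bool)
    volume_fraction = b.mean()
    faces = 0
    for axis in range(3):
        # count adjacent voxel pairs that differ along this axis
        s = np.swapaxes(b, 0, axis)
        faces += np.sum(s[1:] != s[:-1])
    return volume_fraction, float(faces)   # area in units of voxel faces
```

For a solid 10-voxel cube centered in a 20³ domain this returns a volume fraction of 0.125 and 600 boundary faces (6 faces of 10×10 voxels each).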
Development of a distributed air pollutant dry deposition modeling framework
Satoshi Hirabayashi; Charles N. Kroll; David J. Nowak
2012-01-01
A distributed air pollutant dry deposition modeling system was developed with a geographic information system (GIS) to enhance the functionality of i-Tree Eco (i-Tree, 2011). With the developed system, temperature, leaf area index (LAI) and air pollutant concentration in a spatially distributed form can be estimated, and based on these and other input variables, dry...
Mapping local and global variability in plant trait distributions
Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...
2017-12-01
Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolating to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.
Correlation functions in first-order phase transitions
NASA Astrophysics Data System (ADS)
Garrido, V.; Crespo, D.
1997-09-01
Most of the physical properties of systems underlying first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics is considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than that used in previously existing theories, the result is that the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, as an approximation for optically large and soft spheres, i.e., χ⪢1 and |m-1|⪡1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the aerosol robotic network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as a versatile distribution function to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
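The forward model described above has a closed form under the anomalous diffraction approximation: for a soft sphere, the extinction efficiency is Q_ext = 2 − (4/ρ) sin ρ + (4/ρ²)(1 − cos ρ) with phase-shift parameter ρ = 2χ(m − 1). A sketch of this forward calculation (the test size distribution below is an assumption, not the paper's data):

```python
import numpy as np

def q_ext_ada(chi, m):
    """Extinction efficiency under the anomalous diffraction approximation,
    valid for optically large, soft spheres (chi >> 1, |m - 1| << 1)."""
    rho = 2.0 * chi * (m - 1.0)          # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def extinction(radii, number_density, wavelength, m=1.33):
    """Spectral extinction of a polydisperse aerosol via the Beer-Lambert law:
    tau(lambda) = integral of N(r) * pi r^2 * Q_ext(2 pi r / lambda, m) dr,
    here approximated on a uniform radius grid."""
    chi = 2.0 * np.pi * radii / wavelength
    q = q_ext_ada(chi, m)
    dr = radii[1] - radii[0]
    return float(np.sum(number_density * np.pi * radii**2 * q * dr))
```

In the large-particle limit Q_ext approaches 2, the familiar extinction paradox, which gives a quick sanity check on the implementation.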
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...
2017-04-13
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
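The flavor of such a stationarity test can be sketched with a Brownian-bridge-style scan statistic that looks for a deficit of later events; this is an illustrative stand-in for, not a reproduction of, the authors' statistic:

```python
import numpy as np

def quiescence_statistic(event_times, T, n_grid=100):
    """Scan candidate change points t in (0, T) and compare the observed
    event count before t with its expectation under a constant-rate
    Poisson model, standardized as a Brownian-bridge deviation.
    Large positive values indicate a deficit of events after t."""
    times = np.sort(np.asarray(event_times, dtype=float))
    n = len(times)
    stats = []
    for t in np.linspace(0.1 * T, 0.9 * T, n_grid):
        n_before = np.searchsorted(times, t)
        expected = n * t / T                       # stationary expectation
        sd = np.sqrt(n * (t / T) * (1 - t / T))    # bridge standard deviation
        stats.append((n_before - expected) / sd)
    return float(np.max(stats))
```

A process whose rate drops sharply partway through the observation window produces a much larger statistic than a stationary one, which is the behavior the test exploits.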
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment was chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate large and small sample behavior of the sequential adaptive procedure.
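The posterior model probabilities at the heart of such a sequential procedure reduce, for known-variance normal models, to a normalized product of prior and likelihood. A minimal sketch (the two-model setup and parameter values are illustrative assumptions):

```python
import numpy as np

def posterior_model_probs(data, means, sigma=1.0, priors=None):
    """Posterior probability of each candidate model, where model i says
    the data are i.i.d. Normal(means[i], sigma^2). Works in log space
    to avoid underflow for long data sequences."""
    means = np.asarray(means, dtype=float)
    k = len(means)
    priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors, dtype=float)
    # log-likelihood of the data under each candidate mean
    loglik = np.array([-0.5 * np.sum((data - m) ** 2) / sigma**2 for m in means])
    logpost = np.log(priors) + loglik
    logpost -= logpost.max()               # stabilize before exponentiating
    post = np.exp(logpost)
    return post / post.sum()
```

A termination rule of the kind described above would stop sampling once the largest entry of this vector exceeds a chosen threshold.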
NASA Astrophysics Data System (ADS)
Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.
2017-11-01
In order to improve the situation of voltage violation caused by the grid-connection of photovoltaic (PV) systems in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective function of the inner-level programming is to minimize voltage violation, with the power of PV and BESS as the variables. The objective function of the outer-level programming is to minimize the comprehensive function derived from the inner-level programming and all the BESS operating parameters, with the capacity and rated power of BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, the simulation results of the IEEE 33-bus system show that the BESS deployment strategy proposed in this paper is well adapted to voltage violation regulation in variable distribution network operation scenarios. It contributes to regulating voltage violation in the distribution network, as well as to improving the utilization of PV systems.
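The differential evolution solver used above can be sketched in its basic DE/rand/1/bin form; this is the generic optimizer only, not the bi-level BESS model, and the sphere test function is an assumption for illustration:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin minimizer: mutate with a scaled difference of
    two random members, binomially cross with the target, keep the trial
    if it improves. Returns (best_point, best_value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:               # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

On a smooth low-dimensional objective such as the sphere function, this converges to the optimum well within the default budget.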
NASA Astrophysics Data System (ADS)
Oliveira, José J.
2017-10-01
In this paper, we investigate the global convergence of solutions of non-autonomous Hopfield neural network models with discrete time-varying delays, infinite distributed delays, and possible unbounded coefficient functions. Instead of using Lyapunov functionals, we explore intrinsic features between the non-autonomous systems and their asymptotic systems to ensure the boundedness and global convergence of the solutions of the studied models. Our results are new and complement known results in the literature. The theoretical analysis is illustrated with some examples and numerical simulations.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with linear dimensions L=4,6,8,10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
Dynamics of pulsatile flow in fractal models of vascular branching networks.
Bui, Anh; Sutalo, Ilija D; Manasseh, Richard; Liffman, Kurt
2009-07-01
Efficient regulation of blood flow is critically important to the normal function of many organs, especially the brain. To investigate the circulation of blood in complex, multi-branching vascular networks, a computer model consisting of a virtual fractal model of the vasculature and a mathematical model describing the transport of blood has been developed. Although limited by some constraints, in particular, the use of simplistic, uniformly distributed model for cerebral vasculature and the omission of anastomosis, the proposed computer model was found to provide insights into blood circulation in the cerebral vascular branching network plus the physiological and pathological factors which may affect its functionality. The numerical study conducted on a model of the middle cerebral artery region signified the important effects of vessel compliance, blood viscosity variation as a function of the blood hematocrit, and flow velocity profile on the distributions of flow and pressure in the vascular network.
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
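Fitting a beta distribution to fractional cloud-cover data can be done by the method of moments, which maps the sample mean and variance directly to the two shape parameters. A sketch (simulated draws stand in for the empirical distributions; the shape values are assumptions):

```python
import numpy as np
from math import lgamma

def fit_beta_moments(x):
    """Method-of-moments estimates of Beta(alpha, beta) shape parameters
    for data on (0, 1), e.g. fractional cloud cover. Requires the sample
    variance to be smaller than m(1 - m)."""
    m, v = np.mean(x), np.var(x)
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common

def beta_pdf(x, a, b):
    """Beta density evaluated via the log-gamma function for stability."""
    log_beta = lgamma(a) + lgamma(b) - lgamma(a + b)
    return np.exp((a - 1) * np.log(x) + (b - 1) * np.log(1 - x) - log_beta)
```

The method of moments is less efficient than maximum likelihood but gives closed-form estimates, which is convenient when screening many empirical distributions as in the study above.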
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
The tensor distribution function.
Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M
2009-01-01
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
A second generation distributed point polarizable water model.
Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D
2010-01-07
A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.
NASA Astrophysics Data System (ADS)
Lu, Yu; Mo, H. J.; Katz, Neal; Weinberg, Martin D.
2012-04-01
We conduct Bayesian model inferences from the observed K-band luminosity function of galaxies in the local Universe, using the semi-analytic model (SAM) of galaxy formation introduced in Lu et al. The prior distributions for the 14 free parameters include a large range of possible models. We find that some of the free parameters, e.g. the characteristic scales for quenching star formation in both high-mass and low-mass haloes, are already tightly constrained by the single data set. The posterior distribution includes the model parameters adopted in other SAMs. By marginalizing over the posterior distribution, we make predictions that include the full inferential uncertainties for the colour-magnitude relation, the Tully-Fisher relation, the conditional stellar mass function of galaxies in haloes of different masses, the H I mass function, the redshift evolution of the stellar mass function of galaxies and the global star formation history. Using posterior predictive checking with the available observational results, we find that the model family (i) predicts a Tully-Fisher relation that is curved; (ii) significantly overpredicts the satellite fraction; (iii) vastly overpredicts the H I mass function; (iv) predicts high-z stellar mass functions that have too many low-mass galaxies and too few high-mass ones and (v) predicts a redshift evolution of the stellar mass density and the star formation history that are in moderate disagreement. These results suggest that some important processes are still missing in the current model family, and we discuss a number of possible solutions to solve the discrepancies, such as interactions between galaxies and dark matter haloes, tidal stripping, the bimodal accretion of gas, preheating and a redshift-dependent initial mass function.
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
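Stripped of the wavelet shrinkage and mixed-model machinery, the core of a distributed lag regression can be sketched with ordinary least squares on lagged copies of the exposure series (the synthetic data in the test are hypothetical; the actual model adds regularization and repeated measures):

```python
import numpy as np

def fit_distributed_lag(exposure, y, max_lag):
    """Unregularized distributed lag fit: regress y_t on exposure at lags
    0..max_lag via OLS. Returns [intercept, beta_0, ..., beta_max_lag],
    where beta_l is the effect of exposure l steps in the past."""
    exposure = np.asarray(exposure, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    # column for lag l holds exposure[t - l] for t = max_lag .. n-1
    X = np.column_stack([exposure[max_lag - l: n - l] for l in range(max_lag + 1)])
    X = np.column_stack([np.ones(n - max_lag), X])   # intercept column
    coef, *_ = np.linalg.lstsq(X, y[max_lag:], rcond=None)
    return coef
```

With many lags the columns become highly collinear and the coefficients noisy, which is exactly why the paper regularizes the lag function, there via wavelet shrinkage.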
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information on the k-word distributions, the Markov model or both. Motivated by adding k-word distributions to the Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our achievements with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating curve) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences from a database and discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measure is used for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, intended to incorporate k-word distributions into the Markov model, are more efficient.
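The k-word ingredient of such measures is easy to sketch: build normalized k-word frequency vectors and compare them with a vector similarity. The cosine similarity below is a generic alignment-free stand-in, not the paper's wre.k.r or S2.k.r statistics:

```python
from collections import Counter
import math

def kmer_freqs(seq, k):
    """Normalized k-word (k-mer) frequency vector of a sequence,
    returned as a sparse dict mapping each word to its frequency."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def cosine_similarity(f, g):
    """Cosine similarity between two sparse frequency vectors:
    1.0 for identical composition, 0.0 for disjoint word sets."""
    dot = sum(v * g.get(word, 0.0) for word, v in f.items())
    norm_f = math.sqrt(sum(v * v for v in f.values()))
    norm_g = math.sqrt(sum(v * v for v in g.values()))
    return dot / (norm_f * norm_g)
```

Measures like those in the paper refine this picture by subtracting the word frequencies expected under a Markov model, so that only composition beyond the background contributes.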
NASA Astrophysics Data System (ADS)
Gardner, W. P.
2017-12-01
A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional scale spatial distribution of groundwater age.
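The lumped-parameter convolution at the core of such a model can be sketched for the common exponential (well-mixed) residence-time distribution: discharge concentration is the atmospheric input history convolved with the age distribution, attenuated by radioactive decay. Parameter values here are illustrative assumptions:

```python
import numpy as np

def exponential_rtd_concentration(input_history, mean_age, decay_const=0.0, dt=1.0):
    """Tracer concentration in groundwater discharge for an exponential
    residence-time distribution g(t) = exp(-t/tau)/tau (well-mixed aquifer),
    convolved with the input history and first-order decay.
    input_history[0] is the oldest value, input_history[-1] the most recent."""
    c_in = np.asarray(input_history, dtype=float)
    ages = np.arange(len(c_in)) * dt               # age of each water parcel
    g = np.exp(-ages / mean_age) / mean_age        # exponential age pdf
    g /= np.sum(g) * dt                            # renormalize truncated pdf
    weights = g * np.exp(-decay_const * ages) * dt # decay over each parcel's age
    # discharge today samples the input `age` years ago: reverse the history
    return float(np.sum(c_in[::-1] * weights))
```

For a constant, stable tracer the output equals the input (the weights sum to one), while a decaying tracer in old water is depleted, which is the signal used to infer mean age from stream surveys.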
The joint fit of the BHMF and ERDF for the BAT AGN Sample
NASA Astrophysics Data System (ADS)
Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella
2018-01-01
A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass function (BHMF) and the Eddington ratio distribution function (ERDF). The BASS survey is unique in that it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays, and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.
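The disentangling described above can be illustrated in the forward direction: given a BHMF and an ERDF, the luminosity function follows from their convolution, since L = λ_Edd · L_Edd(M_BH). A Monte Carlo sketch with hypothetical log-normal stand-ins (not the fitted BASS functions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-ins: log-normal BHMF and ERDF (illustrative only)
log_mbh = rng.normal(8.0, 0.6, n)        # log10(M_BH / M_sun)
log_edd = rng.normal(-2.0, 0.8, n)       # log10(Eddington ratio)

# L_Edd is about 1.26e38 erg/s per solar mass, so log L is a simple sum
log_l = np.log10(1.26e38) + log_mbh + log_edd

# the luminosity function is (up to normalization) the density of log_l
phi, edges = np.histogram(log_l, bins=40, density=True)
```

The inverse problem solved in the joint fit is harder: many (BHMF, ERDF) pairs can produce similar luminosity functions, which is why independent black hole masses for both AGN types are essential.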
Wavefronts for a global reaction-diffusion population model with infinite distributed delay
NASA Astrophysics Data System (ADS)
Weng, Peixuan; Xu, Zhiting
2008-09-01
We consider a global reaction-diffusion population model with infinite distributed delay which includes models of Nicholson's blowflies and hematopoiesis derived by Gurney, Mackey and Glass, respectively. The existence of monotone wavefronts is derived by using the abstract settings of functional differential equations and Schauder fixed point theory.
A hybrid model of biased inductively coupled discharges
NASA Astrophysics Data System (ADS)
Wen, Deqi; Lieberman, Michael A.; Zhang, Quanzhi; Liu, Yongxin; Wang, Younian
2016-09-01
A hybrid model, i.e. a global model coupled bidirectionally with a parallel Monte-Carlo collision (MCC) sheath model, is developed to investigate an inductively coupled discharge with a bias source. To validate this model, both the bulk plasma density and the ion energy distribution functions (IEDFs) are compared with experimental measurements in an argon discharge, and good agreement is obtained. On this basis, the model is extended to weakly electronegative Ar/O2 plasma. The ion energy and angular distribution functions versus bias voltage amplitude are examined. The different ion species (Ar+, O2+, O+) behave differently because of their different masses. At low bias voltage, Ar+ has a single-peaked energy distribution and O+ has a bimodal distribution. At high bias voltage, the energy peak separation of O+ is wider than that of Ar+. This work has been supported by the National Natural Science Foundation of China (Grant No. 11335004) and Specific project (Grant No. 2011X02403-001), and partially supported by Department of Energy Office of Fusion Energy Science Contract DE-SC000193 and a gift from the Lam Research Corporation.
VizieR Online Data Catalog: Tracers of the Milky Way mass (Bratek+, 2014)
NASA Astrophysics Data System (ADS)
Bratek, L.; Sikora, S.; Jalocha, J.; Kutschera, M.
2013-11-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ~150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass that is small. In particular, a mass value of 2.4×10^11 M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. (1 data file).
NASA Astrophysics Data System (ADS)
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP) and the Hartree-Fock (HF) method has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions for this orbital occur in the small momentum region (i.e. the large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP.
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.
Numerical Simulation of Abandoned Gob Methane Drainage through Surface Vertical Wells
Hu, Guozhong
2015-01-01
The influence of the ventilation system on an abandoned gob weakens over time, so the gas seepage characteristics in an abandoned gob are significantly different from those in a normal mining gob. In view of this, this study physically simulated the movement of the overlying rock strata. A spatial distribution function for gob permeability was derived. A numerical model using FLUENT for abandoned gob methane drainage through surface wells was established, and the derived spatial distribution function for gob permeability was imported into the numerical model. The control range of surface wells, flow patterns and distribution rules for static pressure in the abandoned gob under different well locations were determined using the calculated results from the numerical model. PMID:25955438
NASA Technical Reports Server (NTRS)
Fitzenreiter, R. J.; Scudder, J. D.; Klimas, A. J.
1990-01-01
A model which is consistent with the solar wind and shock surface boundary conditions for the foreshock electron distribution in the absence of wave-particle effects is formulated for an arbitrary location behind the magnetic tangent to the earth's bow shock. Variations of the gyrophase-averaged velocity distribution are compared and contrasted with in situ ISEE observations. It is found that magnetic mirroring of solar wind electrons is the most important process by which nonmonotonic reduced electron distributions in the foreshock are produced. Leakage of particles from the magnetosheath is shown to be relatively unimportant in determining reduced distributions that are nonmonotonic. The two-dimensional distribution function off the magnetic field direction is the crucial contribution in producing reduced distributions which have beams. The time scale for modification of the electron velocity distribution in velocity space can be significantly influenced by steady state spatial gradients in the background imposed by the curved shock geometry.
Modeling pore corrosion in normally open gold- plated copper connectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien
2008-09-01
The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H{sub 2}S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
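The initiation/extinction assumptions described above can be sketched as a simple stochastic bloom-population model. All rates below are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_blooms(n_steps=100, init_rate=5.0, ext_coeff=1e-3, growth=1.0):
    """Corrosion sites initiate at a constant (area-proportional) Poisson
    rate; each surviving bloom grows, and its per-step extinction
    probability is proportional to its current volume."""
    volumes = []
    for _ in range(n_steps):
        volumes.extend([0.0] * rng.poisson(init_rate))   # new sites
        volumes = [v + growth for v in volumes
                   if rng.random() > min(1.0, ext_coeff * v)]
    return volumes

volumes = simulate_blooms()
```

Because initiation is constant while extinction grows with bloom volume, the site density increases with exposure time while the bloom size distribution approaches a roughly time-independent shape, the qualitative behavior observed experimentally.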
NASA Astrophysics Data System (ADS)
Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald
2017-04-01
The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, the residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison
Heavy residues from very mass asymmetric heavy ion reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanold, Karl Alan
1994-08-01
The isotopic production cross sections and momenta of all residues with nuclear charge (Z) greater than 39 from the reaction of 26, 40, and 50 MeV/nucleon 129Xe + Be, C, and Al were measured. The isotopic cross sections, the momentum distribution for each isotope, and the cross section as a function of nuclear charge and momentum are presented here. The new cross sections are consistent with previous measurements of the cross sections from similar reaction systems. The shape of the cross section distribution, when considered as a function of Z and velocity, was found to be qualitatively consistent with that expected from an incomplete fusion reaction mechanism. An incomplete fusion model coupled to a statistical decay model is able to reproduce many features of these reactions: the shapes of the elemental cross section distributions, the emission velocity distributions for the intermediate mass fragments, and the Z versus velocity distributions. This model gives a less satisfactory prediction of the momentum distribution for each isotope. A very different model, based on the Boltzmann-Nordheim-Vlasov equation and also coupled to a statistical decay model, reproduces many features of these reactions: the shapes of the elemental cross section distributions, the intermediate mass fragment emission velocity distributions, and the Z versus momentum distributions. Both model calculations overestimate the average mass for each element by two mass units and underestimate the isotopic and isobaric widths of the experimental distributions. It is shown that the predicted average mass for each element can be brought into agreement with the data by small, but systematic, variation of the particle emission barriers used in the statistical model. The predicted isotopic and isobaric widths of the cross section distributions cannot be brought into agreement with the experimental data using reasonable parameters for the statistical model.
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of censored data on cancer patients after receiving a treatment, using Bayesian estimation under the LINEX loss function for a survival model whose lifetimes are assumed to follow an exponential distribution. With a gamma prior, the likelihood function yields a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the LINEX approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL follow. Finally, we compare maximum likelihood estimation (MLE) with the LINEX approach by their mean squared errors (MSE), the smaller MSE indicating the better method. The MSEs of the hazard and survival estimates are 2.91728E-07 and 0.000309004 under MLE, and 2.8727E-07 and 0.000304131 under Bayesian LINEX, respectively. We conclude that the Bayesian LINEX estimator outperforms the MLE.
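Under a Gamma(α₀, β₀) prior and exponential lifetimes, the posterior is Gamma(α₀ + d, β₀ + ΣT), and the LINEX Bayes rule λ̂ = -(1/a)·ln E[exp(-aλ)] has a closed form via the gamma moment generating function. A sketch with hypothetical data and prior hyperparameters (not the paper's dataset):

```python
import numpy as np

def linex_bayes_rate(times, events, a, alpha0=1.0, beta0=1.0):
    """Bayes estimator of the exponential rate lambda under LINEX loss
    with a Gamma(alpha0, beta0) prior (rate parameterization).
    Posterior: Gamma(alpha0 + d, beta0 + total_time), d = observed deaths.
    LINEX rule: -(1/a) ln E[exp(-a*lambda)], which for a gamma posterior
    equals (alpha/a) * ln(1 + a/beta), valid for a > -beta."""
    d = float(np.sum(events))
    total_time = float(np.sum(times))
    alpha, beta = alpha0 + d, beta0 + total_time
    return (alpha / a) * np.log(1.0 + a / beta)

# hypothetical censored sample (event = 0 means censored)
times = np.array([2.0, 3.5, 1.2, 4.0, 5.0])
events = np.array([1, 1, 0, 1, 0])
lam_bl = linex_bayes_rate(times, events, a=0.5)
hazard_bl = lam_bl                               # constant hazard
survival_bl = lambda t: np.exp(-lam_bl * t)      # S(t) = exp(-lambda*t)
```

For a > 0 the LINEX rule penalizes overestimation more heavily, so λ̂_BL falls below the posterior mean α/β.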
Gothe, Emma; Sandin, Leonard; Allen, Craig R.; Angeler, David G.
2014-01-01
The distribution of functional traits within and across spatiotemporal scales has been used to quantify and infer the relative resilience across ecosystems. We use explicit spatial modeling to evaluate within- and cross-scale redundancy in headwater streams, an ecosystem type with a hierarchical and dendritic network structure. We assessed the cross-scale distribution of functional feeding groups of benthic invertebrates in Swedish headwater streams during two seasons. We evaluated functional metrics, i.e., Shannon diversity, richness, and evenness, and the degree of redundancy within and across modeled spatial scales for individual feeding groups. We also estimated the correlates of environmental versus spatial factors of both functional composition and the taxonomic composition of functional groups for each spatial scale identified. Measures of functional diversity and within-scale redundancy of functions were similar during both seasons, but both within- and cross-scale redundancy were low. This apparent low redundancy was partly attributable to a few dominant taxa explaining the spatial models. However, rare taxa with stochastic spatial distributions might provide additional information and should therefore be considered explicitly for complementing future resilience assessments. Otherwise, resilience may be underestimated. Finally, both environmental and spatial factors correlated with the scale-specific functional and taxonomic composition. This finding suggests that resilience in stream networks emerges as a function of not only local conditions but also regional factors such as habitat connectivity and invertebrate dispersal.
Grassmann phase space theory and the Jaynes-Cummings model
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Garraway, B. M.; Jeffers, J.; Barnett, S. M.
2013-07-01
The Jaynes-Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes-Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker-Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. 
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker-Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions (which are also equivalent to the canonical Grassmann distribution function) to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum-atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes, especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators: c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes-Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum-atom optics.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. The corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant. The vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
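The layer-by-layer correction can be sketched as a gray, non-scattering plane-parallel sum: the surface radiance attenuated by the total transmission, plus each layer's emission attenuated by the layers above it. This is a minimal sketch of the structure of such a correction, not the FORTRAN model itself; the radiances and transmissions below are illustrative inputs:

```python
import numpy as np

def observed_radiance(surface_rad, layer_rads, layer_trans):
    """Upwelling radiance at the top of a plane-parallel atmosphere:
    surface term attenuated by all layers, plus each layer's emission
    attenuated by the layers above it (layers ordered surface -> top)."""
    layer_rads = np.asarray(layer_rads, float)
    layer_trans = np.asarray(layer_trans, float)
    total = surface_rad * np.prod(layer_trans)
    for i in range(len(layer_rads)):
        emission = layer_rads[i] * (1.0 - layer_trans[i])
        total += emission * np.prod(layer_trans[i + 1:])
    return total

# limiting checks: fully transparent layers pass the surface radiance
# unchanged; an opaque top layer replaces it with that layer's emission
r_clear = observed_radiance(100.0, [50.0, 60.0], [1.0, 1.0])
r_opaque = observed_radiance(100.0, [50.0, 60.0], [1.0, 0.0])
```

The atmospheric correction is then the difference between the surface radiance and the radiance the radiometer actually sees, integrated over the instrument's spectral response.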
Nachman, Gösta
2006-01-01
The spatial distributions of two-spotted spider mites Tetranychus urticae and their natural enemy, the phytoseiid predator Phytoseiulus persimilis, were studied on six full-grown cucumber plants. Both mite species were very patchily distributed and P. persimilis tended to aggregate on leaves with abundant prey. The effects of non-homogenous distributions and degree of spatial overlap between prey and predators on the per capita predation rate were studied by means of a stage-specific predation model that averages the predation rates over all the local populations inhabiting the individual leaves. The empirical predation rates were compared with predictions assuming random predator search and/or an even distribution of prey. The analysis clearly shows that the ability of the predators to search non-randomly increases their predation rate. On the other hand, the prey may gain if it adopts a more even distribution when its density is low and a more patchy distribution when density increases. Mutual interference between searching predators reduces the predation rate, but the effect is negligible. The stage-specific functional response model was compared with two simpler models without explicit stage structure. Both unstructured models yielded predictions that were quite similar to those of the stage-structured model.
NASA Technical Reports Server (NTRS)
Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.
1999-01-01
We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.
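The conditional probability distribution functions used above amount to normalized histograms of ozone partitioned by the 300 hPa potential vorticity. A sketch on synthetic data (the PV threshold and ozone distributions are invented for illustration, not taken from the sonde or HALOE records):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# synthetic samples: potential vorticity at 300 hPa and ozone mixing ratio
pv = rng.uniform(0.0, 8.0, n)                     # PV units, illustrative
ozone = np.where(pv > 4.0,
                 rng.normal(300.0, 60.0, n),      # "low tropopause" group
                 rng.normal(120.0, 40.0, n))      # "high tropopause" group

# conditional probability distribution functions for the two groups
bins = np.linspace(0.0, 600.0, 61)
pdf_high_pv, _ = np.histogram(ozone[pv > 4.0], bins=bins, density=True)
pdf_low_pv, _ = np.histogram(ozone[pv <= 4.0], bins=bins, density=True)
```

Repeating this on a series of potential temperature surfaces, for both observations and model output, gives directly comparable statistics despite the very different sampling of the two data sources.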
Price sensitive demand with random sales price - a newsboy problem
NASA Astrophysics Data System (ADS)
Sankar Sana, Shib
2012-03-01
Many newsboy problems have been considered in the stochastic inventory literature. Some assume that stochastic demand is independent of the selling price (p), and others consider demand as a function of a stochastic shock factor and a deterministic sales price. This article introduces price-dependent demand with a stochastic selling price into the classical newsboy problem. The proposed model analyses the expected average profit for a general distribution function of p and obtains an optimal order size. Finally, the model is discussed for various appropriate distribution functions of p and illustrated with numerical examples.
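The expected-profit objective can be explored numerically before deriving the analytic optimum. The sketch below uses illustrative distributions (p uniform, a multiplicative exponential demand shock) rather than the general distribution function treated in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_profit(q, n_sim=50_000, cost=5.0, salvage=1.0):
    """Monte Carlo expected profit for a newsboy with price-dependent
    demand and a random selling price. Illustrative assumptions:
    p ~ Uniform(8, 12), demand = (100 - 5p) * eps with eps ~ Exp(1)."""
    p = rng.uniform(8.0, 12.0, n_sim)
    eps = rng.exponential(1.0, n_sim)
    demand = (100.0 - 5.0 * p) * eps
    sold = np.minimum(q, demand)
    leftover = q - sold
    profit = p * sold + salvage * leftover - cost * q
    return profit.mean()

# crude grid search for the optimal order quantity
qs = np.arange(0, 201, 5)
profits = [expected_profit(q) for q in qs]
q_star = int(qs[int(np.argmax(profits))])
```

The grid search stands in for the first-order condition on the expected average profit; under the article's assumptions the optimum is characterized analytically instead.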
Robust inference in the negative binomial regression model with an application to falls data.
Aeberhard, William H; Cantoni, Eva; Heritier, Stephane
2014-12-01
A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimation methods are well known to be sensitive to model misspecification, which in such intervention studies takes the form of patients falling much more often than expected under the NB regression model. In this article, we extend two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
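The first approach, bounding the Pearson residuals, can be sketched as follows. The Huber-type bounding function with tuning constant c = 1.345 is one common choice (redescending alternatives also fit this slot), and the consistency-correction term of the full M-estimating equations is omitted for brevity:

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber bounding function applied to Pearson residuals."""
    return np.clip(r, -c, c)

def robust_nb_score(beta, X, y, theta, c=1.345):
    """Score-type estimating equations for NB regression with bounded
    Pearson residuals (log link, NB2 variance mu + mu^2/theta).
    A consistency correction term is omitted in this sketch."""
    mu = np.exp(X @ beta)
    var = mu + mu**2 / theta           # NB2 variance function
    r = (y - mu) / np.sqrt(var)        # Pearson residuals
    w = huber_psi(r, c) / np.sqrt(var)
    return X.T @ (w * mu)              # d(mu)/d(eta) = mu for the log link
```

Large counts from "contaminating" patients enter only through the bounded ψ(r), so a single extreme faller cannot dominate the estimating equations the way it does in maximum likelihood.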
Superstatistics model for T₂ distribution in NMR experiments on porous media.
Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S
2014-07-01
We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on a superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for data decay inversion in NMR experiments. We exemplify the method with q-exponential functions and χ(2)-distributions to describe, respectively, data decay and the T2 distribution in high-field experiments on fully water-saturated glass microsphere bed packs and sedimentary rocks from outcrop, and in a noisy low-field experiment on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
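The superstatistical idea can be checked numerically: averaging simple exponential decays over a χ²-type (Gamma) distribution of relaxation rates yields a q-exponential decay in closed form. Parameters below are illustrative, not fitted to any NMR data.

```python
import numpy as np

# Superstatistics check: average exp(-R*t) over a Gamma distribution of
# relaxation rates R and compare with the closed-form q-exponential.
rng = np.random.default_rng(2)

k, theta = 4.0, 0.25                       # Gamma shape/scale; mean rate = 1
t = np.linspace(0.0, 5.0, 50)
R = rng.gamma(k, theta, size=50_000)
decay_mc = np.exp(-np.outer(t, R)).mean(axis=1)  # superstatistical average

# Closed form: <exp(-R*t)> = (1 + theta*t)**(-k), a q-exponential with
# q = 1 + 1/k and effective relaxation time T2* = 1/(k*theta).
decay_q = (1.0 + theta * t) ** (-k)

print(float(np.max(np.abs(decay_mc - decay_q))))  # small Monte Carlo error
```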
Estimation of d-2H Breakup Neutron Energy Distributions From d-3He
Hoop, B.; Grimes, S. M.; Drosg, M.
2017-06-19
A method is described to estimate deuteron-on-deuteron breakup neutron distributions at 0° using deuterium bombardment of 3He. Breakup neutron distributions are modeled with the product of a Fermi-Dirac distribution and a cumulative logistic distribution function. Four measured breakup neutron distributions from 6.15- to 12.0-MeV deuterons on 3He are compared with thirteen measured distributions from 6.83- to 11.03-MeV deuterons on deuterium. Model parameters that describe d-3He neutron distributions are used to estimate neutron distributions from 6- to 12-MeV deuterons on deuterium.
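A minimal sketch of such a product model, with made-up parameter values rather than the paper's fitted ones, shows how the two factors shape the spectrum: the Fermi-Dirac factor sets the high-energy cutoff and the cumulative logistic factor the low-energy rise.

```python
import numpy as np

# Product model sketch; all parameter values are illustrative.
def breakup_spectrum(E, E_fd=8.0, w_fd=0.5, E_cl=2.0, w_cl=0.5, A=1.0):
    fermi_dirac = 1.0 / (np.exp((E - E_fd) / w_fd) + 1.0)      # high-energy cutoff
    cum_logistic = 1.0 / (1.0 + np.exp(-(E - E_cl) / w_cl))    # low-energy rise
    return A * fermi_dirac * cum_logistic

E = np.linspace(0.0, 12.0, 1201)   # neutron energy grid (MeV)
Y = breakup_spectrum(E)
E_peak = float(E[np.argmax(Y)])
print("spectrum peaks near", round(E_peak, 1), "MeV")
```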
Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan
2016-10-01
A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, which includes the rotational velocity of a particle, is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion: one for the translational velocity and the other for the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity, and the two evolution equations evolve separately. The lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, so it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John
2018-07-01
In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to evolution of association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.
2012-02-01
The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered, along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, i.e., the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5 and 50 eV.
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the selectivity of the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
Arbour, J H; López-Fernández, H
2014-11-01
Morphological, lineage and ecological diversity can vary substantially even among closely related lineages. Factors that influence morphological diversification, especially in functionally relevant traits, can help to explain the modern distribution of disparity across phylogenies and communities. Multivariate axes of feeding functional morphology from 75 species of Neotropical cichlid and a stepwise-AIC algorithm were used to estimate the adaptive landscape of functional morphospace in Cichlinae. Adaptive landscape complexity and convergence, as well as the functional diversity of Cichlinae, were compared with expectations under null evolutionary models. Neotropical cichlid feeding function varied primarily between traits associated with ram feeding vs. suction feeding/biting and secondarily with oral jaw muscle size and pharyngeal crushing capacity. The number of changes in selective regimes and the amount of convergence between lineages was higher than expected under a null model of evolution, but convergence was not higher than expected under a similarly complex adaptive landscape. Functional disparity was compatible with an adaptive landscape model, whereas the distribution of evolutionary change through morphospace corresponded with a process of evolution towards a single adaptive peak. The continentally distributed Neotropical cichlids have evolved relatively rapidly towards a number of adaptive peaks in functional trait space. Selection in Cichlinae functional morphospace is more complex than expected under null evolutionary models. The complexity of selective constraints in feeding morphology has likely been a significant contributor to the diversity of feeding ecology in this clade. © 2014 European Society For Evolutionary Biology.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.
2004-01-01
Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles (i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail). Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, in the sub-tropics (Florida) and in the mid-latitudes using identical thermodynamic conditions but with different concentrations of CCN: a low 'clean' concentration and a high 'dirty' concentration.
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or on highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are controlled by traffic signal settings, and effective traffic light control would realize both smooth traffic flow and minimal fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed a recursive input-output traffic queueing model. The queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed-loop framework on stochastic distribution control for the traffic queueing system to make the traffic queueing length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor of concern.
Fast-ion distributions from third harmonic ICRF heating studied with neutron emission spectroscopy
NASA Astrophysics Data System (ADS)
Hellesen, C.; Gatu Johnson, M.; Andersson Sundén, E.; Conroy, S.; Ericsson, G.; Eriksson, J.; Sjöstrand, H.; Weiszflog, M.; Johnson, T.; Gorini, G.; Nocente, M.; Tardocchi, M.; Kiptily, V. G.; Pinches, S. D.; Sharapov, S. E.; EFDA Contributors, JET
2013-11-01
The fast-ion distribution from third harmonic ion cyclotron resonance frequency (ICRF) heating on the Joint European Torus is studied using neutron emission spectroscopy with the time-of-flight spectrometer TOFOR. The energy dependence of the fast deuteron distribution function is inferred from the measured spectrum of neutrons born in DD fusion reactions, and the inferred distribution is compared with theoretical models for ICRF heating. Good agreement between modelling and measurements is seen, with clear features in the fast-ion distribution function that are due to the finite Larmor radius of the resonating ions being replicated. Strong synergetic effects between ICRF and neutral beam injection heating were also seen. The total energy content of the fast-ion population derived from TOFOR data was in good agreement with magnetic measurements for values below 350 kJ.
Magnetopause modeling - Flux transfer events and magnetosheath quasi-trapped distributions
NASA Technical Reports Server (NTRS)
Speiser, T. W.; Williams, D. J.
1982-01-01
Three-dimensional distribution functions for energetic ions are studied numerically in the magnetosphere, through the magnetopause, and in the magnetosheath using a simple one-dimensional quasi-static model and ISEE 1 magnetopause crossing data for November 10, 1977. Quasi-trapped populations in the magnetosheath observed near flux transfer events (FTEs) are investigated, and it is shown that the population in the sheath appears to sandwich the FTE distributions. These quasi-trapped distributions are due to slow, large-pitch-angle, outward-moving particles left behind by the outward rush of the more field-aligned ions at the time the flux tube was opened. It is found that sheath convective flows can map along the connected flux tube without drastically changing the distribution function, and the results suggest that localized tangential fields above the upper limit may exist.
NASA Astrophysics Data System (ADS)
Filinov, A.; Bonitz, M.; Loffhagen, D.
2018-06-01
A new combination of first-principles molecular dynamics (MD) simulations with a rate equation model presented in the preceding paper (paper I) is applied to analyze in detail the scattering of argon atoms from a platinum (111) surface. The combined model is based on a classification of all atom trajectories according to their energies into trapped, quasi-trapped and scattering states. The number of particles in each of the three classes obeys coupled rate equations. The coefficients in the rate equations are the transition probabilities between these states, which are obtained from MD simulations. While these rates are generally time-dependent, after a characteristic time scale t_E of several tens of picoseconds they become stationary, allowing for a rather simple analysis. Here, we investigate this time scale by analyzing in detail the temporal evolution of the energy distribution functions of the adsorbate atoms. We separately study the energy loss distribution function of the atoms and the distribution functions of the in-plane and perpendicular energy components. Further, we compute the sticking probability of argon atoms as a function of incident energy, angle and lattice temperature. Our model is important for plasma-surface modeling as it allows one to extend accurate simulations to longer time scales.
Augmenting aquatic species sensitivity distributions with interspecies toxicity estimation models
Species sensitivity distributions (SSD) are cumulative distribution functions of species toxicity values. The SSD approach is increasingly being used in ecological risk assessment, but is often limited by available toxicity data necessary for diverse species representation. In ...
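A minimal SSD sketch, assuming a log-normal form and a small set of hypothetical toxicity values, fits the cumulative distribution and derives a hazardous concentration (HC5):

```python
import numpy as np
from scipy import stats

# Hypothetical toxicity values (e.g. LC50s in mg/L) for eight species; a
# real SSD would pool measured values with interspecies-estimated ones to
# broaden species representation.
toxicity = np.array([0.8, 1.5, 2.3, 4.1, 6.0, 9.5, 14.0, 22.0])

# Log-normal SSD: a cumulative distribution function over log10 toxicity.
mu, sigma = stats.norm.fit(np.log10(toxicity))

# HC5: the concentration expected to affect 5% of species.
hc5 = 10 ** stats.norm.ppf(0.05, mu, sigma)
# Fraction of species affected at an exposure of 1 mg/L.
frac_affected = stats.norm.cdf(np.log10(1.0), mu, sigma)
print(f"HC5 ~ {hc5:.2f} mg/L; affected fraction at 1 mg/L ~ {frac_affected:.2f}")
```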
Model of bidirectional reflectance distribution function for metallic materials
NASA Astrophysics Data System (ADS)
Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun
2016-09-01
Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
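The three-component decomposition can be sketched with simple in-plane lobes; the weights, lobe widths, and Gaussian lobe shapes below are illustrative stand-ins, not the paper's fitted BRDF terms.

```python
import numpy as np

# Three-component sketch: narrow specular lobe + broader directional
# diffuse lobe + constant (Lambertian) ideal diffuse floor.
def brdf(theta_i, theta_r, ks=0.6, kdd=0.3, kd=0.1, m=0.2):
    spec = ks * np.exp(-((theta_r - theta_i) ** 2) / (2.0 * m**2))
    dir_diff = kdd * np.exp(-((theta_r - theta_i) ** 2) / (2.0 * (4 * m) ** 2))
    ideal = kd / np.pi
    return spec + dir_diff + ideal

theta_r = np.radians(np.linspace(-80.0, 80.0, 161))  # 1-degree steps
vals = brdf(np.radians(30.0), theta_r)
peak_deg = float(np.degrees(theta_r[np.argmax(vals)]))
print("reflection peaks at", round(peak_deg), "deg")
```

The directional diffuse lobe is what distinguishes this from a two-component model: it lets the non-specular part of the reflection remain concentrated near the mirror direction rather than being spread uniformly.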
Foliage Density Distribution and Prediction of Intensively Managed Loblolly Pine
Yujia Zhang; Bruce E. Borders; Rodney E. Will; Hector De Los Santos Posadas
2004-01-01
The pipe model theory says that foliage biomass is proportional to the sapwood area at the base of the live crown. This knowledge was incorporated in an effort to develop a foliage biomass prediction model from integrating a stipulated foliage biomass distribution function within the crown. This model was parameterized using data collected from intensively managed...
Brasil, L S; Juen, L; Batista, J D; Pavan, M G; Cabette, H S R
2014-10-01
We demonstrate that the distribution of the functional feeding groups of aquatic insects is related to hierarchical patch dynamics. Patches are sites with unique environmental and functional characteristics that are discontinuously distributed in time and space within a lotic system. This distribution predicts that the occurrence of species will be based predominantly on their environmental requirements. We sampled three streams within the same drainage basin in the Brazilian Cerrado savanna, focusing on waterfalls and associated habitats (upstream, downstream), representing different functional zones. We collected 2,636 specimens representing six functional feeding groups (FFGs): brushers, collector-gatherers, collector-filterers, shredders, predators, and scrapers. The frequency of occurrence of these groups varied significantly among environments. This variation appeared to be related to the distinct characteristics of the different habitat patches, which led us to infer that the hierarchical patch dynamics model can best explain the distribution of functional feeding groups in minor lotic environments, such as waterfalls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tao; Li, Cheng; Huang, Can
2017-01-09
Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a certain end-member model distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian, and skewed Gaussian distributions have been used as the end-member model distribution, with the analysis performed on the gradient curve of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, their effect on biasing the component analysis is only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves and therefore avoids the differentiation. The new model function can provide a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of a complex magnetic component. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is over two. It is, therefore, recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks are analyzed with the new model distribution.
Given the same number of components, the new model distribution can provide closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps to reduce the subjectivity of component analysis.
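A sketch of the direct-fitting idea, using a mixture of Gaussian CDFs in log10(field) as a stand-in for the paper's model distribution, fitted to a synthetic two-component acquisition curve (so no noisy differentiation is needed):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Two-component remanence curve: each component contributes its saturation
# magnetization times a CDF in log10 of the applied field.
def remanence(logB, m1, mu1, s1, m2, mu2, s2):
    return m1 * norm.cdf(logB, mu1, s1) + m2 * norm.cdf(logB, mu2, s2)

rng = np.random.default_rng(3)
logB = np.linspace(0.5, 3.0, 120)                      # log10 of applied field
true_curve = remanence(logB, 0.7, 1.4, 0.25, 0.3, 2.3, 0.20)
data = true_curve + rng.normal(0.0, 0.005, logB.size)  # measurement noise

p0 = [0.5, 1.2, 0.3, 0.5, 2.0, 0.3]                    # rough initial guesses
popt, _ = curve_fit(remanence, logB, data, p0=p0)
print("recovered component magnetizations:", popt[0].round(2), popt[3].round(2))
```

Fitting the curve itself, rather than its gradient, avoids amplifying the noise term added above; the paper's automated protocol additionally removes the need for the hand-supplied `p0`.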
NASA Astrophysics Data System (ADS)
Selakovic, S.; Cozzoli, F.; Leuven, J.; Van Braeckel, A.; Speybroeck, J.; Kleinhans, M. G.; Bouma, T.
2017-12-01
Interactions between organisms and landscape-forming processes play an important role in the evolution of coastal landscapes. In particular, biota has a strong potential to interact with important geomorphological processes such as sediment dynamics. Although many studies have worked towards quantifying the impact of different species groups on sediment dynamics, information has been gathered on an ad hoc basis. Depending on species' traits and distribution, functional groups of ecoengineering species may have differential effects on sediment deposition and erosion. We hypothesize that the spatial distributions of sediment-stabilizing and destabilizing species across the channel and along the whole salinity gradient of an estuary partly determine the planform shape and channel-shoal morphology of estuaries. To test this hypothesis, we analyze vegetation and macrobenthic data, taking the Scheldt river-estuarine continuum as a model ecosystem. We identify species traits with important effects on sediment dynamics and use them to form functional groups. By using linearized mixed modelling, we are able to accurately describe the distributions of the different functional groups. We observe a clear distinction of dominant ecosystem engineering functional groups and their potential effects on the sediment in the river-estuarine continuum. The first results for the longitudinal cross section show the strongest effects of stabilizing plant species in the riverine part of the continuum and of sediment bioturbators in the weak polyhaline part. The distribution of functional groups in transverse cross sections shows a dominant stabilizing effect in the supratidal zone compared to a dominant destabilizing effect in the lower intertidal zone. This analysis offers a new and more general conceptualization of the distributions of sediment-stabilizing and destabilizing functional groups and their potential impacts on sediment dynamics, shoal patterns, and planform shapes in the river-estuarine continuum.
We intend to test this in future modelling and experiments.
From Whole-Brain Data to Functional Circuit Models: The Zebrafish Optomotor Response.
Naumann, Eva A; Fitzgerald, James E; Dunn, Timothy W; Rihel, Jason; Sompolinsky, Haim; Engert, Florian
2016-11-03
Detailed descriptions of brain-scale sensorimotor circuits underlying vertebrate behavior remain elusive. Recent advances in zebrafish neuroscience offer new opportunities to dissect such circuits via whole-brain imaging, behavioral analysis, functional perturbations, and network modeling. Here, we harness these tools to generate a brain-scale circuit model of the optomotor response, an orienting behavior evoked by visual motion. We show that such motion is processed by diverse neural response types distributed across multiple brain regions. To transform sensory input into action, these regions sequentially integrate eye- and direction-specific sensory streams, refine representations via interhemispheric inhibition, and demix locomotor instructions to independently drive turning and forward swimming. While experiments revealed many neural response types throughout the brain, modeling identified the dimensions of functional connectivity most critical for the behavior. We thus reveal how distributed neurons collaborate to generate behavior and illustrate a paradigm for distilling functional circuit models from whole-brain data. Copyright © 2016 Elsevier Inc. All rights reserved.
A scattering model for forested area
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1988-01-01
A forested area is modeled as a volume of randomly oriented and distributed disc-shaped or needle-shaped leaves shading a distribution of branches modeled as randomly oriented, finite-length dielectric cylinders above an irregular soil surface. Since the radii of branches have a wide range of sizes, the model only requires the length of a branch to be large compared with its radius, which may be any size relative to the incident wavelength. In addition, the model assumes the thickness of a disc-shaped leaf or the radius of a needle-shaped leaf is much smaller than the electromagnetic wavelength. The scattering phase matrices for disc, needle, and cylinder are developed in terms of the scattering amplitudes of the corresponding fields, which are computed by the forward scattering theorem. These quantities, along with the Kirchhoff scattering model for a randomly rough surface, are used in the standard radiative transfer formulation to compute the backscattering coefficient. Numerical illustrations of the backscattering coefficient are given as a function of the shading factor, incidence angle, leaf orientation distribution, branch orientation distribution, and the number density of leaves. Also illustrated are the properties of the extinction coefficient as a function of leaf and branch orientation distributions. Comparisons are made with measured backscattering coefficients from forested areas reported in the literature.
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling when indexed by Horton-Strahler order. The average side-tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
Population patterns in World’s administrative units
Miramontes, Pedro; Cocho, Germinal
2017-01-01
Whereas there has been an extended discussion concerning city population distribution, little has been said about that of administrative divisions. In this work, we investigate the population distribution of second-level administrative units of 150 countries and territories and propose the discrete generalized beta distribution (DGBD) rank-size function to describe the data. After testing the balance between the goodness of fit and number of parameters of this function compared with a power law, which is the most common model for city population, the DGBD is a good statistical model for 96% of our datasets and preferred over a power law in almost every case. Moreover, the DGBD is preferred over a power law for fitting country population data, which can be seen as the zeroth-level administrative unit. We present a computational toy model to simulate the formation of administrative divisions in one dimension and give numerical evidence that the DGBD arises from a particular case of this model. This model, along with the fitting of the DGBD, proves adequate in reproducing and describing local unit evolution and its effect on the population distribution. PMID:28791153
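A small sketch of the DGBD rank-size function f(r) = A(N+1-r)^b / r^a, with made-up exponents, shows how it departs from a pure power law (the b = 0 special case) at the largest ranks:

```python
import numpy as np

# Discrete generalized beta distribution (DGBD) rank-size function with
# illustrative exponents; a power law is recovered when b = 0.
def dgbd(r, N, A, a, b):
    return A * (N + 1 - r) ** b / r ** a

N = 150                      # e.g. number of units ranked, as in the source
r = np.arange(1, N + 1)
pop = dgbd(r, N, A=1e6, a=0.9, b=0.3)

# Log-log slope near the head vs. near the tail: the DGBD steepens sharply
# at the largest ranks, where a pure power law would stay straight.
logr, logp = np.log(r), np.log(pop)
slope_head = (logp[1] - logp[0]) / (logr[1] - logr[0])
slope_tail = (logp[-1] - logp[-2]) / (logr[-1] - logr[-2])
print(round(slope_head, 2), round(slope_tail, 2))
```

The local log-log slope of the DGBD is -a - b·r/(N+1-r), so it approaches -a at the head and diverges at rank N, producing the downward bend that a single power-law exponent cannot capture.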
Quark fragmentation functions in NJL-jet model
NASA Astrophysics Data System (ADS)
Bentz, Wolfgang; Matevosyan, Hrayr; Thomas, Anthony
2014-09-01
We report on our studies of quark fragmentation functions in the Nambu-Jona-Lasinio (NJL) jet model. The results of Monte Carlo simulations for the fragmentation functions to mesons and nucleons, as well as to pion and kaon pairs (dihadron fragmentation functions), are presented. The important role of intermediate vector meson resonances for these semi-inclusive deep inelastic production processes is emphasized. Our studies are very relevant for the extraction of transverse momentum dependent quark distribution functions from measured scattering cross sections. Supported by a Grant in Aid for Scientific Research, Japanese Ministry of Education, Culture, Sports, Science and Technology, Project No. 20168769.
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the values of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
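The paper's functional-integral construction is considerably more elaborate, but the core idea, sampling a positive, discretized rate from its posterior given noisy observations of an accumulating signal, can be illustrated with a toy random-walk Metropolis sampler; every name, value, and model choice below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the paper's problem: a piecewise-constant, positive
# secretion rate r_1..r_K is inferred from noisy observations of its
# running sum (a crude stand-in for accumulating C-peptide).
K = 5
true_r = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
obs = np.cumsum(true_r) + rng.normal(0.0, 0.1, K)

def log_post(log_r, sigma=0.1, prior_sd=2.0):
    """Gaussian likelihood on the cumulative signal plus a Gaussian prior
    on log-rates; positivity is enforced by sampling in log space."""
    resid = obs - np.cumsum(np.exp(log_r))
    return (-0.5 * np.sum(resid**2) / sigma**2
            - 0.5 * np.sum(log_r**2) / prior_sd**2)

# Random-walk Metropolis on the log-rates.
log_r = np.zeros(K)
lp = log_post(log_r)
samples = []
for _ in range(20_000):
    prop = log_r + rng.normal(0.0, 0.05, K)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        log_r, lp = prop, lp_prop
    samples.append(np.exp(log_r))
post_mean = np.mean(samples[10_000:], axis=0)
```

In the paper the prior itself comes from the exact solution of an associated likelihood, and the number of time-steps K is selected by model comparison; here K is simply fixed.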
A probabilistic approach to photovoltaic generator performance prediction
NASA Astrophysics Data System (ADS)
Khallat, M. A.; Rahman, S.
1986-09-01
A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
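The fit-then-integrate procedure described above can be sketched under a deliberately simplified cell model in which the output fraction equals normalized insolation; the beta distribution is one of the three candidates named in the abstract, and all data and parameters below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical hourly insolation sample, normalized to [0, 1], where 1 is
# the reference irradiance at which the panel reaches rated output.
insolation = rng.beta(2.0, 3.0, size=5000)

# Fit a beta distribution to the insolation level (support fixed to [0, 1]);
# the paper also considers Weibull and normal candidates.
a, b, loc, scale = stats.beta.fit(insolation, floc=0, fscale=1)

# With the linear cell model assumed here, the capacity factor is just the
# mean of the fitted distribution.
capacity_factor = stats.beta.mean(a, b, loc=loc, scale=scale)
print(round(capacity_factor, 2))  # close to 2 / (2 + 3) = 0.4
```

A realistic cell model would replace the identity mapping with the temperature- and irradiance-dependent equations the paper formulates, integrating power output against the fitted density instead of taking its mean.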
Current sheet in plasma as a system with a controlling parameter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fridman, Yu. A., E-mail: yulya-fridman@yandex.ru; Chukbar, K. V., E-mail: Chukbar-KV@nrcki.ru
2015-08-15
A simple kinetic model describing stationary solutions with bifurcated and single-peaked current density profiles of a plane electron beam or current sheet in plasma is presented. A connection is established between the two-dimensional constructions arising in terms of the model and the one-dimensional considerations of Bernstein, Greene, and Kruskal, which facilitate the reconstruction of the distribution function of trapped particles when both the profile of the electric potential and the distribution function of free particles are known.
Predicting extinctions as a result of climate change
Mark W. Schwartz; Louis R. Iverson; Anantha M. Prasad; Stephen N. Matthews; Raymond J. O'Connor
2006-01-01
Widespread extinction is a predicted ecological consequence of global warming. Extinction risk under climate change scenarios is a function of distribution breadth. Focusing on trees and birds of the eastern United States, we used joint climate and environment models to examine fit and climate change vulnerability as a function of distribution breadth. We found that...
Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.
Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael
2018-04-01
The distributed delay model has been introduced to replace the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic WBC response. The ν estimate was 1.46 (CV 16.1%), compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a significantly longer running time.
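The convolution form of the distributed delay can be sketched numerically. Only the shape values ν = 1.46 and ν = 3 are taken from the abstract; the input signal, the mean transit time, and the Riemann-sum evaluation are all illustrative stand-ins for the paper's pharmacokinetic model and solver:

```python
import numpy as np
from scipy import stats

def delayed(signal, t, nu, mtt):
    """Distributed-delay output: Riemann-sum convolution of `signal` with
    a gamma density of shape `nu` and mean transit time `mtt`."""
    dt = t[1] - t[0]
    g = stats.gamma.pdf(t, a=nu, scale=mtt / nu)
    return np.convolve(signal, g)[: len(t)] * dt

t = np.linspace(0.0, 24.0, 2401)
signal = np.exp(-t / 2.0)  # hypothetical input, not the paper's PK model
out_int = delayed(signal, t, nu=3.0, mtt=6.0)    # = 3 transit compartments
out_frac = delayed(signal, t, nu=1.46, mtt=6.0)  # fractional shape, as estimated
# The fractional shape lets mass through earlier (its peak shifts left),
# a behavior the integer transit-compartment chain cannot reproduce.
```

For integer ν the gamma density is the Erlang density, which is exactly the impulse response of ν linear transit compartments, so the two formulations coincide there.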
Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong
2017-11-20
A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured the BRDF of three kinds of samples and estimated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.
Revisiting the Landau fluid closure.
NASA Astrophysics Data System (ADS)
Hunana, P.; Zank, G. P.; Webb, G. M.; Adhikari, L.
2017-12-01
Advanced fluid models that are much closer to the full kinetic description than the usual magnetohydrodynamic description are a very useful tool for studying astrophysical plasmas and for interpreting solar wind observational data. The development of advanced fluid models that contain certain kinetic effects is complicated and has attracted much attention in recent years. Here we focus on fluid models that incorporate the simplest possible forms of Landau damping, derived from linear kinetic theory expanded about a leading-order (gyrotropic) bi-Maxwellian distribution function f_0, under the approximation that the perturbed distribution function f_1 is gyrotropic as well. Specifically, we focus on various Padé approximants to the usual plasma response function (and to the plasma dispersion function) and examine possibilities that lead to a closure of the linear kinetic hierarchy of fluid moments. We present a re-examination of the simplest Landau fluid closures.
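The plasma dispersion function targeted by these Padé approximants can be evaluated exactly through the Faddeeva function, and the response function is commonly defined in this literature as R(ζ) = 1 + ζZ(ζ); a brief sketch of the exact quantities the approximants are benchmarked against:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Z(zeta):
    """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)
    (valid as written for Im(zeta) >= 0)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def R(zeta):
    """Plasma response function R(zeta) = 1 + zeta*Z(zeta), the quantity
    the Pade approximants are built for in this literature."""
    return 1.0 + zeta * Z(zeta)

# Known exact values: Z(0) = i*sqrt(pi) ~ 1.77245j, and for large real
# argument Z(zeta) approaches -1/zeta (the asymptotics a useful Pade
# approximant must match).
```

A Padé closure replaces Z (or R) by a rational function matching the power series at small ζ and the asymptotic series at large ζ, which is what makes a finite fluid-moment hierarchy closable.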
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Hayat, Tasawar; Alsaedi, Ahmed
2018-01-01
In this paper, we develop and study a stochastic predator-prey model with stage structure for predator and Holling type II functional response. First of all, by constructing a suitable stochastic Lyapunov function, we establish sufficient conditions for the existence and uniqueness of an ergodic stationary distribution of the positive solutions to the model. Then, we obtain sufficient conditions for extinction of the predator populations in two cases, that is, the first case is that the prey population survival and the predator populations extinction; the second case is that all the prey and predator populations extinction. The existence of a stationary distribution implies stochastic weak stability. Numerical simulations are carried out to demonstrate the analytical results.
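The numerical simulations referred to above are typically Euler-Maruyama integrations. A minimal sketch of a stochastic predator-prey system with a Holling type II response ax/(1 + ahx); note this toy version omits the paper's predator stage structure, and every parameter and noise value is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters; the paper's model additionally has a stage
# structure for the predator, which this minimal sketch omits.
r_growth, K = 1.0, 5.0       # prey growth rate and carrying capacity
a, h = 1.0, 0.5              # Holling type II attack rate, handling time
c, m = 0.6, 0.4              # conversion efficiency, predator mortality
sigma1, sigma2 = 0.05, 0.05  # multiplicative noise intensities

def step(x, y, dt):
    """One Euler-Maruyama step of the stochastic Holling II system."""
    holling = a * x / (1.0 + a * h * x)
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
    x_new = x + (r_growth * x * (1.0 - x / K) - holling * y) * dt + sigma1 * x * dW1
    y_new = y + (c * holling * y - m * y) * dt + sigma2 * y * dW2
    return max(x_new, 0.0), max(y_new, 0.0)

x, y, dt = 1.0, 0.5, 0.01
traj = [(x, y)]
for _ in range(20_000):
    x, y = step(x, y, dt)
    traj.append((x, y))
xs, ys = np.array(traj).T
```

A long-run histogram of (xs, ys) approximates the stationary distribution when the paper's existence conditions hold; under the extinction conditions the predator trajectory instead decays toward zero.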
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Hayat, Tasawar; Alsaedi, Ahmed
2018-06-01
In this paper, we develop and study a stochastic predator-prey model with stage structure for the predator and a Holling type II functional response. First, by constructing a suitable stochastic Lyapunov function, we establish sufficient conditions for the existence and uniqueness of an ergodic stationary distribution of the positive solutions to the model. Then, we obtain sufficient conditions for extinction of the predator populations in two cases: in the first case, the prey population survives while the predator populations go extinct; in the second case, all the prey and predator populations go extinct. The existence of a stationary distribution implies stochastic weak stability. Numerical simulations are carried out to demonstrate the analytical results.
NASA Astrophysics Data System (ADS)
Heumann, B. W.; Guichard, F.; Seaquist, J. W.
2005-05-01
The HABSEED model uses remote sensing derived NPP as a surrogate for habitat quality as the driving mechanism for population growth and local seed dispersal. The model has been applied to the Sahel region of Africa. Results show that the functional response of plants to habitat quality alters population distribution. Plants more tolerant of medium quality habitat have greater distributions to the North while plants requiring only the best habitat are limited to the South. For all functional response types, increased seed production results in diminishing returns. Functional response types have been related to life history tradeoffs and r-K strategies based on the results. Results are compared to remote sensing derived vegetation land cover.
A comparative study of mixture cure models with covariate
NASA Astrophysics Data System (ADS)
Leng, Oh Yit; Khalid, Zarina Mohd
2017-05-01
In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and the absence of these factors from the model may cause an inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influences of observed factors is more appropriate in such cases; these observed factors are included in the survival model as covariates. Besides that, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring the cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the survival function of susceptible individuals, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Therefore, survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
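The mixture cure structure underlying all three models is S(t) = π + (1 − π)S_u(t), where π is the cure fraction and S_u the survival function of susceptible individuals (here Weibull, as in the simulations). A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

def mixture_cure_survival(t, pi_cure, shape, scale):
    """Population survival S(t) = pi + (1 - pi) * S_u(t) with a Weibull
    susceptible survival function S_u(t) = exp(-(t/scale)**shape)."""
    s_u = np.exp(-(t / scale) ** shape)
    return pi_cure + (1.0 - pi_cure) * s_u

def simulate(n, pi_cure, shape, scale):
    """Cured subjects never experience the event (time = infinity);
    susceptible subjects get Weibull event times."""
    cured = rng.random(n) < pi_cure
    times = scale * rng.weibull(shape, n)
    times[cured] = np.inf
    return times

times = simulate(100_000, pi_cure=0.3, shape=1.5, scale=2.0)
# The empirical survival curve plateaus at the cure fraction.
emp = float(np.mean(times > 10.0))  # ~ 0.3
```

The three compared models then differ in where a covariate x enters: in S_u (e.g., through the Weibull scale), in π (e.g., through a logistic link), or in both.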
Ray tracing the Wigner distribution function for optical simulations
NASA Astrophysics Data System (ADS)
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.
Influence of particle size distribution on reflected and transmitted light from clouds.
Kattawar, G W; Plass, G N
1968-05-01
The light reflected and transmitted from clouds with various drop size distributions is calculated by a Monte Carlo technique. Six different models are used for the drop size distribution: isotropic, Rayleigh, haze continental, haze maritime, cumulus, and nimbostratus. The scattering function for each model is calculated from the Mie theory. In general, the reflected and transmitted radiances for the isotropic and Rayleigh models tend to be similar, as are those for the various haze and cloud models. The reflected radiance is less for the haze and cloud models than for the isotropic and Rayleigh models, except for an angle of incidence near the horizon when it is larger around the incident beam direction. The transmitted radiance is always much larger for the haze and cloud models near the incident direction; at distant angles it is less for small and moderate optical thicknesses and greater for large optical thicknesses (all comparisons to isotropic and Rayleigh models). The downward flux, cloud albedo, and mean optical path are discussed. The angular spread of the beam as a function of optical thickness is shown for the nimbostratus model.
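The Monte Carlo technique for slab reflection and transmission can be sketched for the simplest of the six models, isotropic scattering; for the haze and cloud models a Mie phase function would replace the isotropic rescattering step. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def mc_slab(tau_total, albedo, n_photons):
    """Monte Carlo photon transport through a plane-parallel cloud with
    isotropic scattering; returns (reflected, transmitted) fractions.
    Optical depth is the natural coordinate, so no lengths enter."""
    reflected = transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                      # enter at top, moving down
        while True:
            tau += mu * -np.log(rng.random())   # optical path to next event
            if tau < 0.0:
                reflected += 1
                break
            if tau > tau_total:
                transmitted += 1
                break
            if rng.random() > albedo:           # absorbed
                break
            mu = 2.0 * rng.random() - 1.0       # isotropic rescattering
    return reflected / n_photons, transmitted / n_photons

# Conservative scattering (albedo 1): every photon is eventually either
# reflected or transmitted.
r, t = mc_slab(tau_total=1.0, albedo=1.0, n_photons=20_000)
```

Tallying the exit direction mu instead of just counting photons would give the angular radiance distributions the paper compares across drop size models.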
On push-forward representations in the standard gyrokinetic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyato, N., E-mail: miyato.naoaki@jaea.go.jp; Yagi, M.; Scott, B. D.
2015-01-15
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which is described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M; Lonsinger, Robert C; Waits, Lisette P
2015-01-01
Development and evaluation of noninvasive methods for monitoring species distribution and abundance is a growing area of ecological research. While noninvasive methods have the advantage of a reduced risk of the negative factors associated with capture, comparisons to methods using more traditional invasive sampling are lacking. Historically, kit foxes (Vulpes macrotis) occupied the desert and semi-arid regions of southwestern North America. Once the most abundant carnivore in the Great Basin Desert of Utah, the species is now considered rare. In recent decades, attempts have been made to model the environmental variables influencing kit fox distribution. Using noninvasive scat deposition surveys for determination of kit fox presence, we modeled resource selection functions to predict kit fox distribution using three popular techniques (Maxent, fixed-effects, and mixed-effects generalized linear models) and compared these with similar models developed from invasive sampling (telemetry locations from radio-collared foxes). Resource selection functions were developed using a combination of landscape variables including elevation, slope, aspect, vegetation height, and soil type. All models were tested against subsequent scat collections as a method of model validation. We demonstrate the importance of comparing multiple model types for development of resource selection functions used to predict a species distribution, and of evaluating the importance of environmental variables on species distribution. All models we examined showed a large effect of elevation on kit fox presence, followed by slope and vegetation height. However, the invasive sampling method (i.e., radio-telemetry) appeared to be better at determining resource selection, and therefore may be more robust in predicting kit fox distribution.
In contrast, the distribution maps created from the noninvasive sampling (i.e., scat transects) were significantly different from those of the invasive method; thus scat transects may be appropriate when used in an occupancy framework to predict species distribution. We concluded that while scat deposition transects may be useful for monitoring kit fox abundance and possibly occupancy, they do not appear to be appropriate for determining resource selection. On our study area, scat transects were biased toward roadways, while data collected using radio-telemetry were dictated by movements of the kit foxes themselves. We recommend that future studies applying noninvasive scat sampling consider a more robust random sampling design across the landscape (e.g., random transects or more complete road coverage) that would then provide a more accurate and unbiased depiction of resource selection useful to predict kit fox distribution.
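A fixed-effects resource selection function of the kind compared above is, at its core, a used-versus-available logistic regression. A self-contained sketch with synthetic covariates standing in for elevation, slope, and vegetation height (the shifts applied to the used points are arbitrary, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(8)

# Used (presence) versus available (background) design; the three
# synthetic covariates stand in for elevation, slope, vegetation height.
n_used, n_avail = 300, 1000
X_used = rng.normal([1.2, -0.5, 0.3], 1.0, (n_used, 3))
X_avail = rng.normal([0.0, 0.0, 0.0], 1.0, (n_avail, 3))
X = np.vstack([X_used, X_avail])
y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])

def fit_logistic(X, y, iters=200, lr=0.1):
    """Plain gradient-ascent logistic regression, the core of a
    fixed-effects resource selection function."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

w = fit_logistic(X, y)
# w[1] (the 'elevation' stand-in) comes out positive and w[2] ('slope')
# negative, matching how the used points were shifted.
```

A mixed-effects version adds a random intercept per animal, and Maxent replaces the logistic likelihood with a maximum-entropy density over the available points.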
Distribution pattern of public transport passenger in Yogyakarta, Indonesia
NASA Astrophysics Data System (ADS)
Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya
2018-03-01
The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014. No policy change to the operation system has since altered the nature of this pattern: the roads, buses, land uses, schedule, and people are relatively still the same. The data were then categorized based on direction, day, and location. Moreover, each category was fitted to several well-known discrete distributions, which were compared based on their AIC and BIC values. The distribution with the smallest AIC and BIC values was chosen, and the negative binomial distribution was found to have the smallest values. Probability mass function (PMF) plots of those models were compared to derive a generic model from the categorical negative binomial distribution models. The accepted generic negative binomial distribution has a size of 0.7064 and a mu of 1.4504. The minimum and maximum passenger values of the distribution are 0 and 41.
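The model selection step (fit candidate discrete distributions, compare AIC) can be sketched with the reported generic negative binomial parameters (size 0.7064, mu 1.4504); the simulated counts and the Poisson competitor below are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Generic model reported above: negative binomial with size 0.7064 and
# mu 1.4504; scipy's nbinom(n, p) uses p = size / (size + mu).
nb_size, mu = 0.7064, 1.4504
p = nb_size / (nb_size + mu)
counts = stats.nbinom.rvs(nb_size, p, size=2000, random_state=rng)

def aic(loglik, k):
    """Akaike information criterion for k fitted parameters."""
    return 2 * k - 2 * loglik

ll_nb = np.sum(stats.nbinom.logpmf(counts, nb_size, p))
ll_pois = np.sum(stats.poisson.logpmf(counts, counts.mean()))

# Overdispersed counts: the negative binomial wins on AIC despite its
# extra parameter.
print(aic(ll_nb, 2) < aic(ll_pois, 1))  # → True
```

The variance of this generic model, mu + mu^2/size ≈ 4.4 against a mean of 1.45, shows why a Poisson fit (variance forced equal to the mean) loses the comparison.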
Grassmann phase space theory and the Jaynes–Cummings model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalton, B.J., E-mail: bdalton@swin.edu.au; Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne, Victoria 3122; Garraway, B.M.
2013-07-15
The Jaynes–Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes–Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker–Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages.
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker–Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes–Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker–Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker–Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions–that are also equivalent to the canonical Grassmann distribution function–to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum–atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes—especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators, c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes–Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum–atom optics. -- Highlights: •Novel phase space theory of the Jaynes–Cummings model using Grassmann variables. •Fokker–Planck equations solved analytically. •Results agree with the standard quantum optics treatment. •Grassmann phase space theory applicable to fermion many-body problems.
USDA-ARS's Scientific Manuscript database
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
Remote-sensing based approach to forecast habitat quality under climate change scenarios.
Requena-Mullor, Juan M; López, Enrique; Castro, Antonio J; Alcaraz-Segura, Domingo; Castro, Hermelindo; Reyes, Andrés; Cabello, Javier
2017-01-01
As climate change is expected to have a significant impact on species distributions, there is an urgent challenge to provide reliable information to guide conservation biodiversity policies. In addressing this challenge, we propose a remote sensing-based approach to forecast the future habitat quality for the European badger, a species that is not abundant and is at risk of local extinction in the arid environments of southeastern Spain, by incorporating environmental variables related to ecosystem functioning and correlated with climate and land use. Using ensemble prediction methods, we designed global spatial distribution models for the distribution range of the badger using presence-only data and climate variables. Then, we constructed regional models for an arid region in southeastern Spain using EVI (Enhanced Vegetation Index) derived variables and weighting the pseudo-absences with the global model projections applied to this region. Finally, we forecast the badger's potential spatial distribution in the period 2071-2099 based on IPCC scenarios, incorporating the uncertainty derived from the predicted values of the EVI-derived variables. By including remotely sensed descriptors of the temporal dynamics and spatial patterns of ecosystem functioning in spatial distribution models, results suggest that the future forecast is less favorable for European badgers than when these descriptors are not included. In addition, the change in the spatial pattern of habitat suitability may become greater than when forecasts are based only on climate variables. Since the validity of future forecasts based only on climate variables is currently questioned, conservation policies supported by such information could have a biased vision and overestimate or underestimate the potential changes in species distribution derived from climate change.
The incorporation of ecosystem functional attributes derived from remote sensing in the modeling of future forecasts may contribute to improving the detection of ecological responses under climate change scenarios.
Remote-sensing based approach to forecast habitat quality under climate change scenarios
Requena-Mullor, Juan M.; López, Enrique; Castro, Antonio J.; Alcaraz-Segura, Domingo; Castro, Hermelindo; Reyes, Andrés; Cabello, Javier
2017-01-01
As climate change is expected to have a significant impact on species distributions, there is an urgent challenge to provide reliable information to guide conservation biodiversity policies. In addressing this challenge, we propose a remote sensing-based approach to forecast the future habitat quality for the European badger, a species that is not abundant and is at risk of local extinction in the arid environments of southeastern Spain, by incorporating environmental variables related to ecosystem functioning and correlated with climate and land use. Using ensemble prediction methods, we designed global spatial distribution models for the distribution range of the badger using presence-only data and climate variables. Then, we constructed regional models for an arid region in southeastern Spain using EVI (Enhanced Vegetation Index) derived variables and weighting the pseudo-absences with the global model projections applied to this region. Finally, we forecast the badger's potential spatial distribution in the period 2071–2099 based on IPCC scenarios, incorporating the uncertainty derived from the predicted values of the EVI-derived variables. By including remotely sensed descriptors of the temporal dynamics and spatial patterns of ecosystem functioning in spatial distribution models, results suggest that the future forecast is less favorable for European badgers than when these descriptors are not included. In addition, the change in the spatial pattern of habitat suitability may become greater than when forecasts are based only on climate variables. Since the validity of future forecasts based only on climate variables is currently questioned, conservation policies supported by such information could have a biased vision and overestimate or underestimate the potential changes in species distribution derived from climate change.
The incorporation of ecosystem functional attributes derived from remote sensing in the modeling of future forecasts may contribute to improving the detection of ecological responses under climate change scenarios. PMID:28257501
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms.
In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
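The copula idea above can be sketched numerically. Below is a minimal, hypothetical illustration (not the authors' code): errors are mapped to normal scores through their empirical CDF, and a Gaussian copula with a lag-1 correlation supplies a pairwise composite log-likelihood that captures autocorrelation without any Box-Cox transform.

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(errors):
    """Pairwise (lag-1) Gaussian-copula log-likelihood of an error series.
    Marginals come from the empirical CDF, hence 'semiparametric'."""
    n = len(errors)
    u = stats.rankdata(errors) / (n + 1)      # pseudo-observations in (0, 1)
    z = stats.norm.ppf(u)                     # normal scores
    rho = np.corrcoef(z[:-1], z[1:])[0, 1]    # lag-1 dependence of the scores
    ll = 0.0
    for z1, z2 in zip(z[:-1], z[1:]):
        # log of the bivariate Gaussian copula density c(u1, u2; rho)
        ll += (-0.5 * np.log(1.0 - rho**2)
               - (rho**2 * (z1**2 + z2**2) - 2.0 * rho * z1 * z2)
               / (2.0 * (1.0 - rho**2)))
    return ll, rho
```

A strongly autocorrelated error series yields a large positive composite log-likelihood, reflecting the dependence the copula has absorbed.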
NASA Astrophysics Data System (ADS)
Davidson, Eric; Sihi, Debjani; Savage, Kathleen
2017-04-01
Soil fluxes of greenhouse gases (GHGs) play a significant role as biotic feedbacks to climate change. Production and consumption of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) are affected by complex interactions of temperature, moisture, and substrate supply, which are further complicated by spatial heterogeneity of the soil matrix. Models of belowground processes of these GHGs should be internally consistent with respect to the biophysical processes of gaseous production, consumption, and transport within the soil, including the contrasting effects of oxygen (O2) as either substrate or inhibitor. We installed automated chambers to simultaneously measure soil fluxes of CO2 (using LiCor-IRGA), CH4, and N2O (using Aerodyne quantum cascade laser) along soil moisture gradients at the Howland Forest in Maine, USA. Measured fluxes of these GHGs were used to develop and validate a merged model. While originally intended for aerobic respiration, the core structure of the Dual Arrhenius and Michaelis-Menten (DAMM) model was modified by adding M-M and Arrhenius functions for each GHG production and consumption process, and then using the same diffusion functions for each GHG and for O2. The area under a soil chamber was partitioned according to a log-normal probability distribution function, where only a small fraction of microsites had high available-C. The probability distribution of soil C leads to a simulated distribution of heterotrophic respiration, which translates to a distribution of O2 consumption among microsites. Linking microsite consumption of O2 with a diffusion model generates microsite concentrations of O2, which then determine the distribution of microsite production and consumption of CH4 and N2O, and subsequently their microsite concentrations using the same diffusion function. 
At many moisture values, there are some microsites of production and some of consumption for each gas, and the resulting simulated microsite concentrations of CH4 and N2O range from below ambient to above ambient atmospheric values. As soil moisture or temperature increase, the skewness of the microsite distributions of heterotrophic respiration and CH4 concentrations shifts toward a larger fraction of high values, while the skewness of microsite distributions of O2 and N2O concentrations shifts toward a larger fraction of low values. This approach of probability distribution functions for each gas simulates the importance of microsite hotspots of methanogenesis and N2O reduction at high moisture (and temperature). In addition, the model demonstrates that net consumption of atmospheric CH4 and N2O can occur simultaneously within a chamber due to the distribution of soil microsite conditions, which is consistent with some episodes of measured fluxes. Because soil CO2, N2O and CH4 fluxes are linked through substrate supply and O2 effects, the multiple constraints of simultaneous measurements of all three GHGs proved to be effective when applied to our combined model. Simulating all three GHGs simultaneously in a parsimonious modeling framework provides confidence that the most important mechanisms are skillfully simulated using appropriate parameterization and good process representation.
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them that is used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak-height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
A decentralized mechanism for improving the functional robustness of distribution networks.
Shi, Benyun; Liu, Jiming
2012-10-01
Most real-world distribution systems can be modeled as distribution networks, where a commodity can flow from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is the functional robustness, which reflects the ability of maintaining its function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanisms, we consider the problem of how to improve the functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure the functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node need only communicate with its distribution neighbors by sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism can produce results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism when comparing it with an existing algorithm.
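As a toy illustration of an entropy-based robustness utility (the paper's exact functional form is not given in the abstract, so the formula below is only an assumption), one can score a node by the Shannon entropy of its normalized outflow distribution: evenly spread outflows maximize the entropy, while reliance on a single link drives it to zero.

```python
import numpy as np

def flow_entropy(outflows):
    """Shannon entropy of a node's normalized outflow distribution --
    an illustrative proxy for a network-entropy robustness utility,
    not the paper's actual derivation."""
    f = np.asarray(outflows, dtype=float)
    p = f / f.sum()          # normalize flows to a probability distribution
    p = p[p > 0]             # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))
```

A decentralized mechanism would then adjust outflows locally (via the price signals described above) so that the aggregate of such node utilities increases.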
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard five-parameter halo occupation distribution (HOD) model with various halo-scale physics and assembly bias. We describe the methodology of four different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in the uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and this assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and the Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions obtained from Bayesian uncertainty quantification with the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Characterization of Cloud Water-Content Distribution
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2010-01-01
The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
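The statistical step described above — maximum likelihood estimation of a water-content distribution — might be sketched as follows; the lognormal choice and the synthetic sample are illustrative assumptions, not the tool's actual configuration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical cloud water-content sample (g/m^3); a lognormal shape is a
# common assumption for such positive, right-skewed quantities
sample = rng.lognormal(mean=-1.0, sigma=0.6, size=5000)

# Maximum likelihood fit of the lognormal parameters.
# floc=0 pins the location, so shape = sigma and scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(sample, floc=0)
sigma_hat, mu_hat = shape, np.log(scale)
```

In practice such fits would be repeated per stratum (cloud phase, cloud type, precipitation occurrence, geo-location) over the CloudSat retrievals.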
Temperature distribution in the human body under various conditions of induced hyperthermia
NASA Technical Reports Server (NTRS)
Korobko, O. V.; Perelman, T. L.; Fradkin, S. Z.
1977-01-01
A mathematical model based on heat balance equations was developed for studying temperature distribution in the human body under deep hyperthermia which is often induced in the treatment of malignant tumors. The model yields results which are in satisfactory agreement with experimental data. The distribution of temperature under various conditions of induced hyperthermia, i.e. as a function of water temperature and supply rate, is examined on the basis of temperature distribution curves in various body zones.
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I. G.; Blum, D.; Böhm, E.; De Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S. C.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffre, M.; Jacholkowska, A.; Janata, F.; Jancso, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, A.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Poensgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Scholz, M.; Schouten, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thenard, J. M.; Thompson, J. C.; De la Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.; European Muon Collaboration
1989-07-01
A new determination of the u valence quark distribution function in the proton is obtained from the analysis of identified charged pions, kaons, protons and antiprotons produced in muon-proton and muon-deuteron scattering. The comparison with results obtained in inclusive deep inelastic lepton-nucleon scattering provides a further test of the quark-parton model. The u quark fragmentation functions into positive and negative pions, kaons, protons and antiprotons are also measured.
The Urban Forest Effects (UFORE) model: quantifying urban forest structure and functions
David J. Nowak; Daniel E. Crane
2000-01-01
The Urban Forest Effects (UFORE) computer model was developed to help managers and researchers quantify urban forest structure and functions. The model quantifies species composition and diversity, diameter distribution, tree density and health, leaf area, leaf biomass, and other structural characteristics; hourly volatile organic compound emissions (emissions that...
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modelling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: the parametric, nonparametric, and semiparametric methods. In this article we survey kernel-type estimators for copulas, such as the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
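Of the surveyed estimators, the mirror-reflection kernel is simple to sketch: pseudo-observations in the unit square are reflected about its edges so that kernel mass is not lost at the boundary. The following is an illustrative implementation with a Gaussian product kernel; the bandwidth and data are hypothetical, and real applications would use rank-transformed returns.

```python
import numpy as np

def mirror_reflection_copula_density(u, v, grid, bw=0.05):
    """Mirror-reflection kernel estimate of a copula density.
    Each pseudo-observation (u, v) in (0,1)^2 is reflected about the
    square's edges (x -> -x and x -> 2-x per coordinate, 9 copies total),
    so the Gaussian kernel's mass near the boundary is folded back in."""
    pts = []
    for su, cu in ((1, 0), (-1, 0), (-1, 2)):
        for sv, cv in ((1, 0), (-1, 0), (-1, 2)):
            pts.append(np.column_stack((su * u + cu, sv * v + cv)))
    pts = np.vstack(pts)
    out = np.empty(len(grid))
    for i, (gu, gv) in enumerate(grid):
        w = np.exp(-((pts[:, 0] - gu) ** 2 + (pts[:, 1] - gv) ** 2)
                   / (2.0 * bw**2))
        # divide by n (not 9n): the reflections carry the original mass
        out[i] = w.sum() / (len(u) * 2.0 * np.pi * bw**2)
    return out
```

For independent uniforms the true copula density is identically 1, which gives a quick sanity check of the estimator.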
The Cluster Variation Method: A Primer for Neuroscientists.
Maren, Alianna J
2016-09-30
Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Gebo, Norman; Rowland, John
1988-01-01
This effort describes cumulative rain-rate distributions for a network of nine tipping-bucket rain gauge systems located in the mid-Atlantic coast region in the vicinity of the NASA Wallops Flight Facility, Wallops Island, Virginia. The rain gauges are situated within a gridded region measuring 47 km east-west by 70 km north-south. Distributions are presented for the individual site measurements and the network average for the one-year period June 1, 1986 through May 31, 1987. A previous six-year average distribution derived from measurements at one of the site locations is also presented. Comparisons are given among the network average, the CCIR (International Radio Consultative Committee) climatic-zone, and the CCIR functional model distributions, the last of which approximates a log-normal at lower rain rates and a gamma function at higher rates.
Wesolowski, David J.; Wang, Hsiu -Wen; Page, Katharine L.; ...
2015-12-08
MXenes are a recently discovered family of two-dimensional (2D) early transition metal carbides and carbonitrides, which have already shown many attractive properties and great promise in energy storage and many other applications. However, a complex surface chemistry and small coherence length have been obstacles in some applications of MXenes, also limiting the accuracy of predictions of their properties. In this study, we describe and benchmark a novel way of modeling layered materials with real interfaces (diverse surface functional groups and stacking order between adjacent monolayers) against experimental data. The structures of three kinds of Ti3C2Tx MXenes (T stands for surface-terminating species, including O, OH, and F) produced under different synthesis conditions were resolved for the first time using the atomic pair distribution function obtained by high-quality neutron total scattering. We show that the true nature of the material can be easily captured with the sensitivity of neutron scattering to the surface species of interest and the detailed "third-generation" structure model. The modeling approach leads to new understanding of MXene structural properties and can replace the currently used idealized models in predictions of a variety of physical, chemical, and functional properties of Ti3C2-based MXenes. Moreover, the developed models can be employed to guide the design of new MXene materials with selected surface termination and controlled contact-angle, catalytic, optical, electrochemical, and other properties. Finally, we suggest that multilevel structural modeling should form the basis for a generalized methodology for modeling diffraction and pair distribution function data for 2D and layered materials.
NASA Astrophysics Data System (ADS)
Bernhard, E.; Mullaney, J. R.; Aird, J.; Hickox, R. C.; Jones, M. L.; Stanley, F.; Grimmett, L. P.; Daddi, E.
2018-05-01
The lack of a strong correlation between AGN X-ray luminosity (LX; a proxy for AGN power) and the star formation rate (SFR) of their host galaxies has recently been attributed to stochastic AGN variability. Studies using population synthesis models have incorporated this by assuming a broad, universal (i.e. does not depend on the host galaxy properties) probability distribution for AGN specific X-ray luminosities (i.e. the ratio of LX to host stellar mass; a common proxy for Eddington ratio). However, recent studies have demonstrated that this universal Eddington ratio distribution fails to reproduce the observed X-ray luminosity functions beyond z ∼ 1.2. Furthermore, empirical studies have recently shown that the Eddington ratio distribution may instead depend upon host galaxy properties, such as SFR and/or stellar mass. To investigate this further, we develop a population synthesis model in which the Eddington ratio distribution is different for star-forming and quiescent host galaxies. We show that, although this model is able to reproduce the observed X-ray luminosity functions out to z ∼ 2, it fails to simultaneously reproduce the observed flat relationship between SFR and X-ray luminosity. We can solve this, however, by incorporating a mass dependency in the AGN Eddington ratio distribution for star-forming host galaxies. Overall, our models indicate that a relative suppression of low Eddington ratios (λEdd ≲ 0.1) in lower mass galaxies (M* ≲ 10^(10-11) M⊙) is required to reproduce both the observed X-ray luminosity functions and the observed flat SFR/X-ray relationship.
NASA Astrophysics Data System (ADS)
Malkin, B. Z.; Abishev, N. M.; Baibekov, E. I.; Pytalev, D. S.; Boldyrev, K. N.; Popova, M. N.; Bettinelli, M.
2017-07-01
We construct a distribution function of the strain-tensor components induced by point defects in an elastically anisotropic continuum, which can be used to account quantitatively for many effects observed in different branches of condensed matter physics. Parameters of the derived six-dimensional generalized Lorentz distribution are expressed through integrals computed over the array of strains. The distribution functions for the cubic diamond and elpasolite crystals and tetragonal crystals with the zircon and scheelite structures are presented. Our theoretical approach is supported by a successful modeling of specific line shapes of singlet-doublet transitions of the Tm3+ ions doped into ABO4 (A = Y, Lu; B = P, V) crystals with zircon structure, observed in high-resolution optical spectra. The values of the defect strengths of impurity Tm3+ ions in the oxygen surroundings, obtained as a result of this modeling, can be used in future studies of random strains in different rare-earth oxides.
NASA Technical Reports Server (NTRS)
El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Peroomian, V.; Frank, L. A.; Paterson, W. R.; Bosqued, J. M.
1998-01-01
On February 9, 1995, the Comprehensive Plasma Instrumentation (CPI) on the Geotail spacecraft observed a complex, structured ion distribution function near the magnetotail midplane at x approximately -30 R(sub E). On this same day the Wind spacecraft observed a quiet solar wind and an interplanetary magnetic field (IMF) that was northward for more than five hours, and an IMF B(sub y) component with a magnitude comparable to that of the IMF B(sub z) component. In this study, we determined the sources of the ions in this distribution function by following approximately 90,000 ion trajectories backward in time, using the time-dependent electric and magnetic fields obtained from a global MHD simulation. The Wind observations were used as input for the MHD model. The ion distribution function observed by Geotail at 1347 UT was found to consist primarily of particles from the dawnside low-latitude boundary layer (LLBL) and from the duskside LLBL; fewer than 2% of the particles originated in the ionosphere.
NASA Technical Reports Server (NTRS)
Querci, F.; Kunde, V. G.; Querci, M.
1971-01-01
The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, and Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data, with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium, which relate the distribution functions to each other on a per-gram-of-stellar-material basis.
Energy and enthalpy distribution functions for a few physical systems.
Wu, K L; Wei, J H; Lai, S K; Okabe, Y
2007-08-02
The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. This distribution theory has the salient trait that it utilizes only experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model, for which simulation results are available for comparison; then the biphasic behavior of the binary alloy Na-K, whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically; and finally, the thermally induced state behavior of a collection of 16 proteins.
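The moment-based maximum entropy step can be sketched generically. The paper works with energy/enthalpy moments from thermodynamic data; the toy below instead assumes power-moment constraints on a grid and recovers a standard normal from its first two moments by minimizing the convex dual over the Lagrange multipliers of p(x) ∝ exp(-Σ λ_k x^k).

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(x, moments):
    """Maximum-entropy density on grid x matching the power moments
    E[x^k], k = 1..K. Minimizes the convex dual log Z(lam) + lam·m,
    whose gradient vanishes exactly when the moments are matched."""
    mom = np.asarray(moments, dtype=float)
    powers = np.vstack([x**k for k in range(1, len(mom) + 1)])
    dx = x[1] - x[0]

    def dual(lam):
        expo = -lam @ powers
        m = expo.max()                       # numerical stabilization
        z = np.sum(np.exp(expo - m)) * dx    # partition function (Riemann sum)
        return m + np.log(z) + lam @ mom

    lam = minimize(dual, np.zeros(len(mom)), method="BFGS").x
    expo = -lam @ powers
    p = np.exp(expo - expo.max())
    return p / (p.sum() * dx)                # normalize on the grid

# Recover a standard normal from its first two moments E[x]=0, E[x^2]=1
x = np.linspace(-5.0, 5.0, 401)
p = maxent_density(x, [0.0, 1.0])
```

With only two moment constraints the maxent solution is Gaussian; higher moments would produce the skewed or bimodal shapes relevant to phase behavior.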
Statistics of the geomagnetic secular variation for the past 5 Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Prospects of second generation artificial intelligence tools in calibration of chemical sensors.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala
2005-05-01
Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, an NN does not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes, or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for the experimental data and function approximation models for the more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
NASA Astrophysics Data System (ADS)
Zainudin, W. N. R. A.; Ramli, N. A.
2017-09-01
In 2010, the Energy Commission (EC) introduced Incentive Based Regulation (IBR) to ensure a sustainable Malaysian Electricity Supply Industry (MESI), promote transparent and fair returns, encourage maximum efficiency and maintain a policy-driven end-user tariff. To cater for such a revolutionary transformation, a sophisticated system for generating a policy-driven electricity tariff structure is greatly needed. Hence, this study presents a data analytics framework that generates an altered revenue function based on a varying power consumption distribution and tariff charge function. For the purpose of this study, the power consumption distribution is proxied by the proportion of household consumption and electricity consumed in kWh, and the tariff charge function is proxied by a three-tiered increasing block tariff (IBT). The altered revenue function is useful for indicating whether changes in the power consumption distribution and tariff charges will have a positive or negative impact on the economy. The methodology begins by defining revenue as a function of the power consumption distribution and the tariff charge function. Then, the proportion of household consumption and the tariff charge function are derived within certain intervals of electricity power. Any changes in those proportions are conjectured to contribute towards changes in the revenue function, and can thus indicate whether changes in the power consumption distribution and tariff charge function have a positive or negative impact on TNB revenue. Based on the findings of this study, major changes in the tariff charge function appear to affect the altered revenue function more than the power consumption distribution does. However, the paper concludes that both the power consumption distribution and the tariff charge function can influence TNB revenue to a great extent.
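A minimal sketch of the revenue computation described above, assuming a three-tiered IBT with invented block widths and rates (not actual TNB tariffs) and a discrete consumption distribution:

```python
def block_tariff_bill(kwh, blocks=((200, 0.218), (100, 0.334), (float("inf"), 0.516))):
    """Bill for one household under a three-tiered increasing block tariff (IBT).
    Each block is (width in kWh, rate per kWh); values here are illustrative."""
    bill, remaining = 0.0, kwh
    for width, rate in blocks:
        used = min(remaining, width)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return bill

def revenue(consumption_distribution, n_households):
    """Aggregate revenue: consumption levels (kWh) weighted by household share."""
    return n_households * sum(share * block_tariff_bill(kwh)
                              for kwh, share in consumption_distribution)

# e.g. 60% of households consuming 150 kWh, 40% consuming 400 kWh
total = revenue([(150, 0.6), (400, 0.4)], n_households=1000)
```

Shifting either the shares (the consumption distribution) or the block rates (the tariff charge function) and recomputing `revenue` gives exactly the kind of altered-revenue comparison the framework describes.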
Using maximum topology matching to explore differences in species distribution models
Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio
2015-01-01
Species distribution models (SDMs) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists identify areas where their data or understanding of the system is incomplete, helps guide further investigation in these regions, and can also reveal an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. To accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between the two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.
Panda, Subhamay; Kumari, Leena
2017-01-01
Serine proteases are a group of enzymes that hydrolyse the peptide bonds of proteins. In mammals, these enzymes help regulate several major physiological functions such as digestion, blood clotting, immune responses, reproductive functions and the complement system. Serine proteases obtained from the venom of the Octopodidae family are a relatively unexplored area of research. In the present work, we utilized a comparative composite molecular modeling technique. Our key aim was to propose the first molecular model structure of the unexplored serine protease 5 derived from the big blue octopus. The other objective of this study was to analyze, with the aid of different bioinformatic tools, the distribution of negatively and positively charged amino acids over the modeled structure, the distribution of secondary structural elements, the hydrophobicity of the molecular surface and the electrostatic potential. The molecular model was generated with the I-TASSER suite, and the refined structural model was then validated with standard methods. For functional annotation of the protein we used the Protein Information Resource (PIR) database. The functionally critical amino acids and ligand-binding site (LBS) of the modeled protein were determined using the COACH program. The molecular model data, together with the other pertinent post-model analyses, provide molecular insight into the proteolytic activity of serine protease 5, which helps in understanding the procoagulant and anticoagulant characteristics of this natural lead molecule.
Our approach was to investigate the octopus venom protein, as a whole or in part of its structure, as a possible source of new lead molecules. Copyright © Bentham Science Publishers.
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
ERIC Educational Resources Information Center
Sung, Kyongje
2008-01-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…
Lee II, Henry; Reusser, Deborah A.; Frazier, Melanie R; McCoy, Lee M; Clinton, Patrick J.; Clough, Jonathan S.
2014-01-01
The “Sea‐Level Affecting Marshes Model” (SLAMM) is a moderate-resolution model used to predict the effects of sea level rise on marsh habitats (Craft et al. 2009). SLAMM has been used extensively on both the west coast (e.g., Glick et al., 2007) and east coast (e.g., Geselbracht et al., 2011) of the United States to evaluate potential changes in the distribution and extent of tidal marsh habitats. However, a limitation of the current version of SLAMM (Version 6.2) is that it lacks the ability to model distribution changes in seagrass habitat resulting from sea level rise. Because of the ecological importance of submerged aquatic vegetation (SAV) habitats, U.S. EPA, USGS, and USDA partnered with Warren Pinnacle Consulting to enhance the SLAMM modeling software with new functionality for predicting changes in Zostera marina distribution within Pacific Northwest estuaries in response to sea level rise. Specifically, the objective was to develop a SAV model that used generally available GIS data and predictive parameters and that could be customized for other estuaries with GIS layers of existing SAV distribution. This report describes the procedure used to develop the SAV model for the Yaquina Bay Estuary, Oregon; includes as an appendix a statistical script, based on the open-source R software, for generating a similar SAV model for other estuaries with data layers of existing SAV; and describes how to incorporate the coefficients from the site‐specific SAV model into SLAMM to predict the effects of sea level rise on Zostera marina distributions. To demonstrate the applicability of the R tools, we use them to develop model coefficients for Willapa Bay, Washington using site‐specific SAV data.
A simulator for evaluating methods for the detection of lesion-deficit associations
NASA Technical Reports Server (NTRS)
Megalooikonomou, V.; Davatzikos, C.; Herskovits, E. H.
2000-01-01
Although much has been learned about the functional organization of the human brain through lesion-deficit analysis, the variety of statistical and image-processing methods developed for this purpose precludes a closed-form analysis of the statistical power of these systems. Therefore, we developed a lesion-deficit simulator (LDS), which generates artificial subjects, each of which consists of a set of functional deficits, and a brain image with lesions; the deficits and lesions conform to predefined distributions. We used probability distributions to model the number, sizes, and spatial distribution of lesions, to model the structure-function associations, and to model registration error. We used the LDS to evaluate, as examples, the effects of the complexities and strengths of lesion-deficit associations, and of registration error, on the power of lesion-deficit analysis. We measured the numbers of recovered associations from these simulated data, as a function of the number of subjects analyzed, the strengths and number of associations in the statistical model, the number of structures associated with a particular function, and the prior probabilities of structures being abnormal. The number of subjects required to recover the simulated lesion-deficit associations was found to have an inverse relationship to the strength of associations, and to the smallest probability in the structure-function model. The number of structures associated with a particular function (i.e., the complexity of associations) had a much greater effect on the performance of the analysis method than did the total number of associations. We also found that registration error of 5 mm or less reduces the number of associations discovered by approximately 13% compared to perfect registration. The LDS provides a flexible framework for evaluating many aspects of lesion-deficit analysis.
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of the modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of modes considered in our context. In this respect, a mode selection preprocessing step is proposed; it allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model, which is not uniquely defined. Two strategies are proposed and compared: the first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in identifying the probability distribution of the random modal parameters and second in estimating the 99% quantiles of some transfer functions.
Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe
2016-01-01
Functionally Graded Scaffolds (FGSs) are porous biomaterials whose porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amount of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, the pore dimensions are almost constant throughout the entire scaffold, and using a FGS allows amounts of bone formation only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs significantly increase bone formation compared to homogeneous porosity scaffolds. 
Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing the geometry of functionally graded scaffolds based on mechanobiological criteria.
NASA Astrophysics Data System (ADS)
Senshu, H.; Kimura, H.; Yamamoto, T.; Wada, K.; Kobayashi, M.; Namiki, N.; Matsui, T.
2015-10-01
The velocity distribution function of photoelectrons from a surface exposed to solar UV radiation is fundamental to the electrostatic state of the surface. There is only one laboratory measurement of photoelectron emission from astronomically relevant material, and its energy distribution function was measured only over emission angles from 0 to about π/4 from the surface normal. Therefore, the measured distribution is not directly usable to estimate the vertical structure of the photoelectric sheath above the surface. In this study, we develop a new analytical method to calculate an angle-resolved velocity distribution function of photoelectrons from the laboratory measurement data. We find that the photoelectric current and yield for lunar surface fines measured in the laboratory have been underestimated by a factor of two. We apply our new energy distribution function of photoelectrons to model the formation of the photoelectric sheath above the surface of asteroid 433 Eros. Our model shows that a 0.1 μm-radius dust grain can librate above the surface of asteroid 433 Eros regardless of its launching velocity. In addition, a 0.5 μm grain can hover over the surface if it was launched at a velocity slower than 0.4 m/s, a more stringent condition for levitation than in previous studies. However, a lack of high-energy data on the photoelectron energy distribution above 6 eV prevents us from firmly placing a constraint on the levitation condition.
Effects of molecular and particle scatterings on the model parameter for remote-sensing reflectance.
Lee, ZhongPing; Carder, Kendall L; Du, KePing
2004-09-01
For optically deep waters, remote-sensing reflectance (r(rs)) is traditionally expressed as the ratio of the backscattering coefficient (b(b)) to the sum of the absorption and backscattering coefficients (a + b(b)), multiplied by a model parameter (g, or the so-called f'/Q). Parameter g is further expressed as a function of b(b)/(a + b(b)) (or b(b)/a) to account for its variation due to multiple scattering. With such an approach, the same g value will be derived for different a and b(b) values that yield the same ratio. Because g is partially a measure of the angular distribution of upwelling light, and the angular distribution from molecular scattering is quite different from that of particle scattering, g values are expected to vary with different scattering distributions even if the b(b)/a ratios are the same. In this study, after numerically demonstrating the effects of molecular and particle scattering on the values of g, an innovative r(rs) model is developed. This new model expresses r(rs) as two separate terms: one governed by the phase function of molecular scattering and one governed by the phase function of particle scattering, with a model parameter introduced for each term. In this way the phase-function effects of molecular and particle scattering are explicitly separated and accounted for. The new model provides an analytical tool to understand and quantify the phase-function effects on r(rs), and a platform to calculate the r(rs) spectrum quickly and accurately, as required for remote-sensing applications.
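The structure of the two-term model can be sketched as below. The molecular and particle terms each carry their own parameter; the numerical values of `g_m` and `g_p` here are placeholders (in the actual model the particle parameter also varies with b(b)p/(a + b(b)), which is omitted in this sketch):

```python
def rrs_traditional(a, bb, g=0.1):
    """Traditional one-parameter form: r_rs = g * bb / (a + bb)."""
    return g * bb / (a + bb)

def rrs_two_term(a, bb_m, bb_p, g_m=0.113, g_p=0.197):
    """Two-term form following the abstract's structure: separate molecular (bb_m)
    and particle (bb_p) backscattering contributions, each with its own model
    parameter. g_m and g_p are illustrative placeholders, not fitted coefficients."""
    bb = bb_m + bb_p
    return g_m * bb_m / (a + bb) + g_p * bb_p / (a + bb)
```

With this separation, two waters having the same total b(b)/a but different molecular/particle partitions produce different reflectances, which is exactly the effect the one-parameter form cannot capture.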
The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1
1978-10-01
standard Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not...forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the...complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial
Turbulent Equilibria for Charged Particles in Space
NASA Astrophysics Data System (ADS)
Yoon, Peter
2017-04-01
The solar wind electron distribution function is apparently composed of several components, including a non-thermal tail population. The electron distribution containing the energetic tail feature is well fitted by the kappa distribution function. The solar wind protons also possess a quasi-power-law tail that is well fitted by an inverse power-law model. The present paper discusses the latest theoretical developments regarding the dynamical steady-state solution for electrons and Langmuir turbulence in turbulent equilibrium. According to this theory, the Maxwellian and kappa distribution functions for the electrons emerge as the only two possible solutions that satisfy the steady-state weak-turbulence plasma kinetic equation. For the proton inverse power-law tail problem, a similar turbulent equilibrium solution can be conceived of, but instead of high-frequency Langmuir fluctuations, the theory involves low-frequency kinetic Alfvénic turbulence. The steady-state solution of the self-consistent proton kinetic equation and the wave kinetic equation for Alfvénic waves can be found in order to obtain a self-consistent solution for the inverse power-law tail distribution function.
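The kappa distribution mentioned above has a standard isotropic 3-D form that reduces to the Maxwellian as κ → ∞; a minimal numerical sketch (using one common normalization convention, with θ the thermal speed parameter):

```python
from math import gamma, pi, exp

def kappa_dist(v, theta=1.0, kappa=3.0, n=1.0):
    """Isotropic 3-D kappa velocity distribution, f(v) ~ (1 + v^2/(kappa*theta^2))^-(kappa+1),
    with the usual Gamma-function normalization (valid for kappa > 3/2)."""
    norm = n / (pi ** 1.5 * theta ** 3) \
         * gamma(kappa + 1.0) / (kappa ** 1.5 * gamma(kappa - 0.5))
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

def maxwellian(v, theta=1.0, n=1.0):
    """Maxwellian limit recovered as kappa -> infinity."""
    return n / (pi ** 1.5 * theta ** 3) * exp(-(v / theta) ** 2)
```

Small κ gives the hard power-law tail (f ∝ v^(−2(κ+1)) at large v) characteristic of the solar wind electron halo, while large κ reproduces the thermal core.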
Ferguson, Sue A.; Allread, W. Gary; Burr, Deborah L.; Heaney, Catherine; Marras, William S.
2013-01-01
Background: Biomechanical, psychosocial and individual risk factors for low back disorder have been studied extensively; however, few researchers have examined all three together. The objective of this study was to develop a low back disorder risk model for furniture distribution workers using biomechanical, psychosocial and individual risk factors. Methods: This was a prospective study with a six-month follow-up. There were 454 subjects at 9 furniture distribution facilities enrolled in the study. Biomechanical exposure was evaluated using the American Conference of Governmental Industrial Hygienists (2001) lifting threshold limit values for low back injury risk. Psychosocial and individual risk factors were evaluated via questionnaires. Low back functional status was measured using the lumbar motion monitor. Low back disorder cases were defined as a loss of low back functional performance of −0.14 or more. Findings: There were 92 cases of meaningful loss in low back functional performance and 185 non-cases. A multivariate logistic regression model that included baseline functional performance probability, facility, perceived workload, intermediate reach distance, number of exertions above threshold limit values, job tenure, manual material handling, and age provided a model sensitivity of 68.5% and specificity of 71.9%. Interpretation: The results of this study indicate which biomechanical, individual and psychosocial risk factors are important, as well as how much of each risk factor is too much, resulting in increased risk of low back disorder among furniture distribution workers. PMID:21955915
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
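For the plain negative binomial case (before the shape-dependent extension the abstract describes), the void probability has a simple closed form that can be sketched directly; the Poisson limit is recovered as the clustering parameter k → ∞:

```python
from math import exp

def void_probability_nb(nbar, k):
    """P(N=0) for negative-binomial counts in cells with mean nbar and
    clustering parameter k: P0 = (1 + nbar/k)^(-k)."""
    return (1.0 + nbar / k) ** (-k)

def void_probability_poisson(nbar):
    """Unclustered (Poisson) limit: P0 = exp(-nbar), recovered as k -> infinity."""
    return exp(-nbar)
```

Clustering (finite k) always raises the void probability above the Poisson value at fixed mean count, which is why void statistics are a sensitive probe of the galaxy distribution.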
Effect of noise on defect chaos in a reaction-diffusion model.
Wang, Hongli; Ouyang, Qi
2005-06-01
The influence of noise on defect chaos due to the breakup of spiral waves through Doppler and Eckhaus instabilities is investigated numerically with a modified FitzHugh-Nagumo model. By numerical simulations we show that noise can drastically enhance the creation and annihilation rates of topological defects. The noise-free probability distribution function for defects in this model is found not to fit the previously reported squared-Poisson distribution. Under the influence of noise, the distributions are flattened and can fit either the squared-Poisson or the modified-Poisson distribution. The defect lifetime and the diffusive properties of defects under the influence of noise are also examined in this model.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.; Johnson, D.; Remer, L.
2004-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by its own size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and in the mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. In addition, differences and similarities between bulk microphysics and spectral-bin microphysical schemes are examined and discussed.
Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.
NASA Astrophysics Data System (ADS)
Maheras, Steven James
Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository, to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two-order-of-magnitude range. However, the plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude of range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude of range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel, with sensitivity coefficients and partial correlation coefficients as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations and showed little sensitivity to parameters related to cask inspections.
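The complementary cumulative distribution function (CCDF) used in such analyses is just the empirical survival function of the Monte Carlo dose samples; a minimal sketch with an assumed lognormal dose population (the parameters are illustrative, not repository data):

```python
import random

def ccdf(samples):
    """Empirical complementary CDF: pairs (x, P(X > x)) at each sorted sample."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, 1.0 - (i + 1) / n) for i, x in enumerate(xs)]

random.seed(1)
# Illustrative Monte Carlo dose samples: lognormal with median 1 (arbitrary units)
doses = [random.lognormvariate(0.0, 1.0) for _ in range(10000)]
curve = ccdf(doses)
```

Plotting `curve` on a log-log scale gives the familiar CCDF used to show that the probability of exceeding a regulatory dose limit is acceptably small.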
NASA Astrophysics Data System (ADS)
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-01-01
The phonon growth characteristics in a degenerate semiconductor have been calculated under low-temperature conditions. If the lattice temperature is high, the energy of an intravalley acoustic phonon is negligibly small compared to the average thermal energy of the electrons; hence one can traditionally assume the electron-phonon collisions to be elastic and approximate the Bose-Einstein (B.E.) distribution for the phonons by the simple equipartition law. However, in the present analysis at low lattice temperatures, the interaction of the non-equilibrium electrons with the acoustic phonons becomes inelastic, and the simple equipartition law for the phonon distribution is not valid. Hence the analysis takes into account the inelastic collisions and the complete form of the B.E. distribution. The high-field distribution function of the carriers, given by the Fermi-Dirac (F.D.) function at the field-dependent carrier temperature, has been approximated by a well-tested model that overcomes the intrinsic problem of correctly evaluating the integrals involving products and powers of the Fermi function. The results thus obtained are more reliable than the rough estimates one may obtain by using the exact F.D. function while taking recourse to oversimplified approximations.
ON A POSSIBLE SIZE/COLOR RELATIONSHIP IN THE KUIPER BELT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, R. E.; Kavelaars, J. J., E-mail: repike@uvic.ca
2013-10-01
Color measurements and albedo distributions introduce non-intuitive observational biases in size-color relationships among Kuiper Belt Objects (KBOs) that cannot be disentangled without a well-characterized sample population with systematic photometry. Peixinho et al. report that the form of the KBO color distribution varies with absolute magnitude, H. However, Tegler et al. find that KBO color distributions are a property of object classification. We construct synthetic models of observed KBO colors based on two B-R color distribution scenarios: color distribution dependent on H magnitude (H-Model) and color distribution based on object classification (Class-Model). These synthetic B-R color distributions were modified to account for observational flux biases. We compare our synthetic B-R distributions to the observed "Hot" and "Cold" detected objects from the Canada-France Ecliptic Plane Survey and the Meudon Multicolor Survey. For both surveys, the Hot population color distribution rejects the H-Model but is well described by the Class-Model. The Cold objects reject the H-Model, but the Class-Model (while not statistically rejected) also does not provide a compelling match to the data. Although we formally reject models where the structure of the color distribution is a strong function of H magnitude, we also do not find that a simple dependence of color distribution on orbit classification is sufficient to describe the color distribution of classical KBOs.
Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins.
Nguyen, Bao Linh; Pettitt, B Montgomery
2015-04-14
The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a given distance from the closest (surface-perpendicular) protein atom; we consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, the solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types, from both small models and protein averages, reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective as, and less computationally demanding than, reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations.
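The defining operation behind a pRDF, binning each water by its distance to the *closest* protein atom, can be sketched on toy coordinates as below (no shell-volume normalization is applied, so this is the raw near-neighbor distance histogram rather than a fully normalized pRDF):

```python
import math
from collections import Counter

def proximal_histogram(protein_atoms, waters, dr=0.5, rmax=8.0):
    """Histogram of each water's distance to its closest protein atom.
    Coordinates are (x, y, z) tuples; dr is the bin width, rmax the cutoff."""
    hist = Counter()
    for w in waters:
        d = min(math.dist(w, p) for p in protein_atoms)  # closest-atom distance
        if d < rmax:
            hist[int(d / dr)] += 1
    return [hist[i] for i in range(int(rmax / dr))]
```

Normalizing each bin by its (non-trivial, surface-dependent) proximal shell volume and by the bulk density would turn this count histogram into the pRDF proper; partitioning the waters by the atom type of their closest protein atom gives the per-type pRDFs the study averages.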
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
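As a numerical sketch of the measures above (assuming the forms Φ(β/√2) for the probit link, exp(β)/[1+exp(β)] for the log-log link, and the logit approximation exp(β/√2)/[1+exp(β/√2)]):

```python
import math

def superiority_probit(beta):
    """Ordinal superiority under the probit cumulative link: Phi(beta / sqrt(2))."""
    # Phi(x) = (1 + erf(x/sqrt(2))) / 2, so Phi(beta/sqrt(2)) = (1 + erf(beta/2)) / 2
    return 0.5 * (1.0 + math.erf(beta / 2.0))

def superiority_loglog(beta):
    """Ordinal superiority under the log-log link: exp(beta) / (1 + exp(beta))."""
    return math.exp(beta) / (1.0 + math.exp(beta))

def superiority_logit(beta):
    """Approximate ordinal superiority under the logit link."""
    z = beta / math.sqrt(2.0)
    return math.exp(z) / (1.0 + math.exp(z))
```

At β = 0 all three measures equal 0.5, i.e. neither group tends to give higher responses; positive β pushes the measure above 0.5.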
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Kaufmann, Eric; Levene, Mark; Loizou, George
Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into social phenomena. Contextual and selection effects tend to produce extreme values in the tails of rank-ordered distributions of both census data and district-level election outcomes. Models that account for this nonlinearity generally outperform linear models. Fitting nonlinear functions based on rank-ordering census and election data therefore improves the fit of aggregate voting models. This may help improve ecological inference, as well as election forecasting in majoritarian systems. We propose a generative multiplicative decrease model that gives rise to a rank-order distribution and facilitates the analysis of the recent UK EU referendum results. We supply empirical evidence that the beta-like survival function, which can be generated directly from our model, is a close fit to the referendum results, and also may have predictive value when covariate data are available.
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
NASA Astrophysics Data System (ADS)
Wang, L.; Kerr, L. A.; Bridger, E.
2016-02-01
Changes in species distributions have been widely associated with climate change. Understanding how ocean temperatures influence species distributions is critical for elucidating the role of climate in ecosystem change as well as for forecasting how species may be distributed in the future. As such, species distribution modeling (SDM) is increasingly useful in marine ecosystems research, as it can enable estimation of the likelihood of encountering marine fish in space or time as a function of a set of environmental and ecosystem conditions. Many traditional SDM approaches are applied to species data collected through standardized methods that include both presence and absence records, but are incapable of using presence-only data, such as those collected from fisheries or through citizen science programs. Maximum entropy (MaxEnt) models provide promising tools as they can predict species distributions from incomplete information (presence-only data). We developed a MaxEnt framework to relate the occurrence records of several marine fish species (e.g. Atlantic herring, Atlantic mackerel, and butterfish) to environmental conditions. Environmental variables derived from remote sensing, such as monthly average sea surface temperature (SST), are matched with fish species data, and model results indicate the relative occurrence rate of the species as a function of the environmental variables. The results can be used to provide hindcasts of where species might have been in the past in relation to historical environmental conditions, nowcasts in relation to current conditions, and forecasts of future species distributions. In this presentation, we will assess the relative influence of several environmental factors on marine fish species distributions, and evaluate the effects of data coverage on these presence-only models. We will also discuss how the information from species distribution forecasts can support climate adaptation planning in marine fisheries.
NASA Astrophysics Data System (ADS)
Plegnière, Sabrina; Casper, Markus; Hecker, Benjamin; Müller-Fürstenberger, Georg
2014-05-01
Many models that calculate and assess climate change and its consequences are based on annual means of temperature and precipitation. This approach introduces considerable uncertainty, especially at the regional or local level, where the results are unrealistic or too coarse. Particularly in agriculture, single events and the distribution of precipitation and temperature during the growing season have an enormous influence on plant growth. Therefore, the temporal distribution of climate variables should not be ignored. To account for it, a high-resolution ecological-economic model was developed which combines a complex plant growth model (STICS) with an economic model. In this context, the input data of the plant growth model are daily climate values for a specific climate station calculated by the statistical climate model WETTREG. The economic model is deduced from the results of the plant growth model STICS. Corn was chosen as the model crop because it is widely cultivated and used in many different ways. First, a sensitivity analysis showed that the plant growth model STICS is suitable for calculating, in a realistic way, the influences of different cultivation methods and climate on plant growth and yield as well as on soil fertility, e.g. through nitrate leaching. Additional simulations helped to assess a production function that is the key element of the economic model. Thereby, the problems of using mean values of temperature and precipitation to compute a production function by linear regression are pointed out. Several examples show why a linear regression based on mean climate values or a smoothed natural distribution leads to imperfect results, and why it is not possible to deduce a unique climate factor in the production function. One solution to this problem is the additional consideration of stress indices that quantify the impairment of plants by water or nitrate shortage.
Thus, the resulting model takes into account not only the ecological factors (e.g. plant growth) and the economic factors as a simple monetary calculation, but also their mutual influences. Finally, the ecological-economic model enables us to carry out risk assessments and to evaluate adaptation strategies.
Ion-Acoustic Double-Layers in Plasmas with Nonthermal Electrons
NASA Astrophysics Data System (ADS)
Rios, L. A.; Galvão, R. M. O.
2014-12-01
A double layer (DL) consists of a positive/negative Debye sheath connecting two quasineutral regions of a plasma. These nonlinear structures can be found in a variety of plasmas, from discharge tubes to space plasmas. They have applications in plasma processing and space propulsion, and the concept is also important for areas such as applied geophysics. In the present work we investigate ion-acoustic double-layers (IADLs). It is believed that these structures are responsible for the acceleration of auroral electrons, for example. The plasma distributions near a DL are usually non-Maxwellian and can be modeled via a κ distribution function. In its reduced form, the standard κ distribution is equivalent to the distribution function obtained from the maximization of the Tsallis entropy, the q distribution. The parameters κ and q measure the deviation from the Maxwellian equilibrium ("nonthermality"), with κ = 1/(q-1); in the limit κ → ∞ (q → 1) the Maxwellian distribution is recovered. The existence of obliquely propagating IADLs in magnetized two-electron plasmas is investigated, with the hot electron population modeled via a κ distribution function [1]. Our analysis shows that only subsonic and rarefactive DLs exist for the entire range of parameters investigated. The small-amplitude DLs exist only for τ=Th/Tc greater than a critical value, which grows as κ decreases. We also observe that these structures exist only for large values of δ=Nh0/N0, but never for δ=1. In our model, which assumes a quasineutral condition, the Mach number M grows as θ decreases (θ is the angle between the directions of the external magnetic field and wave propagation). However, M as well as the DL amplitude are reduced as a consequence of nonthermality. The relation of the quasineutral condition and the functional form of the distribution function to the nonexistence of IADLs has also been analyzed and some interesting results have been obtained.
A more detailed discussion about this topic will be presented during the conference. References: [1] L. A. Rios and R. M. O. Galvão, Phys. Plasmas 20, 112301 (2013).
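The κ → ∞ (q → 1) Maxwellian limit quoted above can be checked numerically. The sketch below uses only the generic factor (1 + x/κ)^(−κ); normalization constants and the κ-versus-κ+1 exponent conventions, which vary between papers, are deliberately left out:

```python
import math

def kappa_from_q(q):
    """Tsallis correspondence kappa = 1/(q - 1); q -> 1 gives kappa -> infinity."""
    return 1.0 / (q - 1.0)

def kappa_factor(x, kappa):
    """Core kappa-distribution factor; tends to the Maxwellian exp(-x) as kappa -> infinity."""
    return (1.0 + x / kappa) ** (-kappa)
```

For finite κ the factor decays more slowly than exp(-x), which is the suprathermal ("nonthermal") tail that the κ model captures.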
Dynamics of a stochastic HIV-1 infection model with logistic growth
NASA Astrophysics Data System (ADS)
Jiang, Daqing; Liu, Qun; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed; Xia, Peiyan
2017-03-01
This paper is concerned with a stochastic HIV-1 infection model with logistic growth. Firstly, by constructing suitable stochastic Lyapunov functions, we establish sufficient conditions for the existence of ergodic stationary distribution of the solution to the HIV-1 infection model. Then we obtain sufficient conditions for extinction of the infection. The stationary distribution shows that the infection can become persistent in vivo.
Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*
NASA Astrophysics Data System (ADS)
Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.
2016-04-01
A model of the electron-hole pair generation rate distribution in a semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter that uses 63Ni isotope radiation. Using Monte Carlo methods from the GEANT4 toolkit with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used for calibration and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, the serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary between likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions depict an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data.
Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual error assumptions. Thus, likelihood function L7 provides credible posterior distributions of the model parameters and can therefore be employed for further applications.
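As a concrete example of one of the informal likelihood functions above, the Nash-Sutcliffe efficiency compares model residuals to the variance of the observations (a minimal sketch with hypothetical data; the actual study uses flood hydrographs):

```python
def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var
```

NS = 0 means the model is no better than predicting the observed mean; negative values mean it is worse.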
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Zhao, Yingying; Huang, Shuaiyu; Zhang, Song; Zhang, Lin
2017-01-01
The goal of this work was to develop a coarse-grained (CG) model of a β-O-4 type lignin polymer, because of the time-consuming process required to achieve equilibrium for its atomistic model. The automatic adjustment method was used to develop the lignin CG model, which enables easy discrimination between chemically-varied polymers. In the process of building the lignin CG model, a sum of n Gaussian functions was obtained by an approximation of the corresponding atomistic potentials derived from a simple Boltzmann inversion of the distributions of the structural parameters. This allowed the establishment of the potential functions of the CG bond stretching and angular bending. To obtain the potential function of the CG dihedral angle, an algorithm similar to a Fourier progression form was employed together with a nonlinear curve-fitting method. The numerical potentials of the nonbonded portion of the lignin CG model were obtained using a potential inversion iterative method derived from the corresponding atomistic nonbonded distributions. The study results showed that the proposed CG model of lignin agreed well with its atomistic model in terms of the distributions of bond lengths, bending angles, dihedral angles and nonbonded distances between the CG beads. The lignin CG model also reproduced the static and dynamic properties of the atomistic model. The results of the comparative evaluation of the two models suggested that the designed lignin CG model was efficient and reliable.
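The simple Boltzmann inversion step mentioned above maps a normalized distribution P(x) of a structural parameter (bond length, angle, ...) to a potential U(x) = -kB·T·ln P(x). A minimal sketch; the unit choice for kB and the temperature are assumptions:

```python
import math

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K) -- an assumed unit choice

def boltzmann_invert(dist, temperature=300.0):
    """U(x) = -kB * T * ln P(x) for each bin with nonzero probability."""
    return {x: -KB * temperature * math.log(p) for x, p in dist.items() if p > 0.0}
```

The inverted table is what would then be smoothed (e.g. by the sum of Gaussians described above) before use as a CG bonded potential.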
Wigner functions for evanescent waves.
Petruccelli, Jonathan C; Tian, Lei; Oh, Se Baek; Barbastathis, George
2012-09-01
We propose phase space distributions, based on an extension of the Wigner distribution function, to describe fields of any state of coherence that contain evanescent components emitted into a half-space. The evanescent components of the field are described in an optical phase space of spatial position and complex-valued angle. Behavior of these distributions upon propagation is also considered, where the rapid decay of the evanescent components is associated with the exponential decay of the associated phase space distributions. To demonstrate the structure and behavior of these distributions, we consider the fields generated from total internal reflection of a Gaussian Schell-model beam at a planar interface.
Fan, Yuting; Li, Jianqiang; Xu, Kun; Chen, Hao; Lu, Xun; Dai, Yitang; Yin, Feifei; Ji, Yuefeng; Lin, Jintong
2013-09-09
In this paper, we analyze the performance of IEEE 802.11 distributed coordination function in simulcast radio-over-fiber-based distributed antenna systems (RoF-DASs) where multiple remote antenna units (RAUs) are connected to one wireless local-area network (WLAN) access point (AP) with different-length fiber links. We also present an analytical model to evaluate the throughput of the systems in the presence of both the inter-RAU hidden-node problem and fiber-length difference effect. In the model, the unequal delay induced by different fiber length is involved both in the backoff stage and in the calculation of Ts and Tc, which are the period of time when the channel is sensed busy due to a successful transmission or a collision. The throughput performances of WLAN-RoF-DAS in both basic access and request to send/clear to send (RTS/CTS) exchange modes are evaluated with the help of the derived model.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, were studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
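Under the Poisson occurrence model, the two hazard functions named above follow directly from the mean activity rate λ and the magnitude distribution. The sketch below uses an unbounded Gutenberg-Richter form for P(M ≥ m) purely for illustration; the paper also covers the nonparametric case:

```python
import math

def gr_ccdf(m, m_min, beta):
    """Unbounded Gutenberg-Richter: P(M >= m) = exp(-beta * (m - m_min))."""
    return math.exp(-beta * (m - m_min))

def exceedance_probability(rate, t, m, m_min, beta):
    """Poisson probability of at least one event with M >= m within time t."""
    return 1.0 - math.exp(-rate * t * gr_ccdf(m, m_min, beta))

def mean_return_period(rate, m, m_min, beta):
    """Mean return period of events with M >= m: 1 / (rate * P(M >= m))."""
    return 1.0 / (rate * gr_ccdf(m, m_min, beta))
```

The product rate*t*P(M ≥ m) is exactly the quantity whose size (above or below ~5.0) governs which uncertainty source dominates in the abstract's findings.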
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.; Tubbs, J. D.
1982-01-01
A five-parameter bivariate gamma distribution (BGD) having two shape parameters, two location parameters, and a correlation parameter is investigated. This general BGD is expressed as a double series and as a single series of the modified Bessel function. It reduces to the known special case for equal shape parameters. Practical functions for computer evaluation of the general BGD and of special cases are presented. Applications to wind gust modeling for the ascent flight of the space shuttle are illustrated.
Two-component scattering model and the electron density spectrum
NASA Astrophysics Data System (ADS)
Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.
2010-02-01
In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in pulsar PSR B2111 + 46 and PSR B0136 + 57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsar PSR B2111 + 46 and PSR B0136 + 57.
3D glasma initial state for relativistic heavy ion collisions
Schenke, Björn; Schlichting, Sören
2016-10-13
We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.
Radial basis function and its application in tourism management
NASA Astrophysics Data System (ADS)
Hu, Shan-Feng; Zhu, Hong-Bin; Zhao, Lei
2018-05-01
In this work, several applications and the performance of the radial basis function (RBF) are briefly reviewed first. After that, the binomial function combined with three different RBFs, the multiquadric (MQ), inverse quadric (IQ) and inverse multiquadric (IMQ) distributions, is adopted to model the tourism data of Huangshan in China. Simulation results show that all the models match the sample data very well. Among the three models, the IMQ-RBF model is found to be more suitable for forecasting the tourist flow.
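For reference, the three RBFs compared above can be written as follows (one common convention with shape parameter c; placement of the shape parameter varies across the RBF literature):

```python
import math

def multiquadric(r, c=1.0):
    """MQ: sqrt(r^2 + c^2), grows with distance."""
    return math.sqrt(r * r + c * c)

def inverse_quadric(r, c=1.0):
    """IQ: 1 / (r^2 + c^2), decays with distance."""
    return 1.0 / (r * r + c * c)

def inverse_multiquadric(r, c=1.0):
    """IMQ: 1 / sqrt(r^2 + c^2), decays with distance."""
    return 1.0 / math.sqrt(r * r + c * c)
```

An RBF model is then a weighted sum of such kernels centered at the data sites, with weights fitted to the samples.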
NASA Astrophysics Data System (ADS)
Li, Hanshan
2016-04-01
To enhance the stability and reliability of multi-screen testing systems, this paper studies the transmission link properties and performance of multi-screen target optical information over long distances. A discrete multi-tone modulation transmission model is set up based on the geometric model of a laser multi-screen testing system and the principles of visible-light information communication. The electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system are analyzed; the information transmission performance and transfer function of the generalized visible-light communication channel are studied; a spatial light-intensity distribution model and distribution function of the optical information communication transmission link are established; and an SNR model of the information transmission communication system is derived. Calculations and experimental analysis show that, for a given channel modulation depth, the transmission error rate increases with the transmission rate, and that with an appropriate transmission rate the bit error rate reaches 0.01.
Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A
2015-06-01
Of the many classes of bidirectional reflectance distribution function (BRDF) models, two popular classes of models are the microfacet model and the linear systems diffraction model. The microfacet model has the benefit of speed and simplicity, as it uses geometric optics approximations, while linear systems theory uses a diffraction approach to compute the BRDF, at the expense of greater computational complexity. In this Letter, nongrazing BRDF measurements of rough and polished surface-reflecting materials at multiple incident angles are scaled by the microfacet cross section conversion term, but in the linear systems direction cosine space, resulting in great alignment of BRDF data at various incident angles in this space. This results in a predictive BRDF model for surface-reflecting materials at nongrazing angles, while avoiding some of the computational complexities in the linear systems diffraction model.
NASA Technical Reports Server (NTRS)
Rood, Richard B.; Douglass, Anne R.; Cerniglia, Mark C.; Sparling, Lynn C.; Nielsen, J. Eric
1999-01-01
We present a study of the distribution of ozone in the lowermost stratosphere with the goal of characterizing the observed variability. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High (low) potential vorticity at 300 hPa indicates that the tropopause is low (high), and the identification of these two groups is made to account for the dynamic variability. Conditional probability distribution functions are used to define the statistics of the ozone distribution from both observations and a three-dimensional model simulation using winds from the Goddard Earth Observing System Data Assimilation System for transport. Ozone data sets include ozonesonde observations from northern midlatitude stations (1991-96) and midlatitude observations made by the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) (1994-1998). The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause (approximately 380K). The probability distribution functions are similar for the two data sources, despite differences in horizontal and vertical resolution and spatial and temporal sampling. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. Results show that during summer, much of the observed variability is explained by the height of the tropopause. During the winter and spring, when the tropopause fluctuations are larger, less of the variability is explained by tropopause height. This suggests that more mixing occurs during these seasons. During all seasons, there is a transition zone near the tropopause that contains air characteristic of both the troposphere and the stratosphere.
The relevance of the results to the assessment of the environmental impact of aircraft effluence is also discussed.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
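A kernel-smoothed distribution function of the kind interpolated above can be built from a gauge's samples with a Gaussian kernel; it is monotonically increasing by construction. A minimal sketch, with the bandwidth h as an assumed tuning parameter:

```python
import math

def kernel_cdf(samples, h):
    """Gaussian-kernel smoothed empirical CDF: the mean of normal CDFs
    centered at the sample values with standard deviation h."""
    def cdf(x):
        return sum(0.5 * (1.0 + math.erf((x - s) / (h * math.sqrt(2.0))))
                   for s in samples) / len(samples)
    return cdf

# hypothetical hourly intensities at one gauge
F = kernel_cdf([0.5, 1.2, 3.4, 7.9], h=0.5)
```

Quantile values read off such smoothed CDFs are what the interpolation scheme above transfers between gauges.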
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
A popular distribution is the exponential distribution, as shown in Figure 3 (Figure 3: Exponential Distribution; Bourke, 2001). Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Queues with Dropping Functions and General Arrival Processes
Chydzinski, Andrzej; Mrozowski, Pawel
2016-01-01
In a queueing system with a dropping function, the arriving customer can be denied service (dropped) with a probability that is a function of the queue length at the time of arrival of this customer. The potential applicability of such a mechanism is very wide, due to the fact that by choosing the shape of this function one can easily manipulate several performance characteristics of the queueing system. In this paper we carry out an analysis of the queueing system with a dropping function and a very general model of the arrival process: a model which includes batch arrivals and interarrival time autocorrelation, and allows for fitting the actual shape of the interarrival time distribution and its moments. For such a system we obtain formulas for the distribution of the queue length and the overall customer loss ratio. The analytical results are accompanied by numerical examples computed for several dropping functions. PMID:26943171
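The dropping-function mechanism is easy to simulate. The sketch below is a hypothetical event-driven M/M/1-type queue (Poisson arrivals, exponential service, infinite buffer), far simpler than the general arrival process analyzed in the paper, in which an arrival is dropped with probability drop_fn(queue length):

```python
import random

def simulate_dropping_queue(lam, mu, drop_fn, n_events=100000, seed=1):
    """Return the empirical loss ratio of an M/M/1-style queue where an
    arriving customer is dropped with probability drop_fn(queue_length)."""
    random.seed(seed)
    t, q = 0.0, 0
    arrivals = losses = 0
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    for _ in range(n_events):
        if next_arrival < next_departure:          # arrival event
            t = next_arrival
            arrivals += 1
            if random.random() < drop_fn(q):
                losses += 1                        # customer dropped
            else:
                q += 1
                if q == 1:                         # server was idle: start service
                    next_departure = t + random.expovariate(mu)
            next_arrival = t + random.expovariate(lam)
        else:                                      # departure event
            t = next_departure
            q -= 1
            next_departure = t + random.expovariate(mu) if q > 0 else float("inf")
    return losses / arrivals if arrivals else 0.0
```

Shaping drop_fn (e.g. linearly increasing in the queue length) is exactly the knob the abstract describes for trading queue length against loss ratio.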
NASA Technical Reports Server (NTRS)
Convery, P. D.; Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.
2002-01-01
Nongyrotropic plasma distribution functions can be formed in regions of space where guiding center motion breaks down as a result of strongly curved and weak ambient magnetic fields. Such are the conditions near the current sheet in the Earth's middle and distant magnetotail, where observations of nongyrotropic ion distributions have been made. Here a systematic parameter study of nongyrotropic proton distributions using electromagnetic hybrid simulations is made. We model the observed nongyrotropic distributions by removing a number of arc length segments from a cold ring distribution and find significant differences with the results of simulations that initially have a gyrotropic ring distribution. Model nongyrotropic distributions with initially small perpendicular thermalization produce growing fluctuations that diffuse the ions into a stable Maxwellian-like distribution within a few proton gyroperiods. The growing waves produced by nongyrotropic distributions are similar to the electromagnetic proton cyclotron waves produced by a gyrotropic proton ring distribution in that they propagate parallel to the background magnetic field and occur at frequencies on the order of the proton gyrofrequency. The maximum energy of the fluctuating magnetic field increases as the initial proton distribution is made more nongyrotropic, that is, more highly bunched in perpendicular velocity space. This increase can be as much as twice the energy produced in the gyrotropic case.
Milky Way Mass Models and MOND
NASA Astrophysics Data System (ADS)
McGaugh, Stacy S.
2008-08-01
Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ R_d ≲ 2.5 kpc. The favored value of R_d depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
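The "simple" interpolation function μ(x) = x/(1+x) can be inverted in closed form, which makes a rotation-curve sketch straightforward. The snippet below uses a point mass rather than the Tuorla-Heidelberg mass model, and the unit values are illustrative assumptions:

```python
import numpy as np

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / Msun
a0 = 3.7e3     # MOND acceleration scale, ~1.2e-10 m/s^2, in km^2 s^-2 kpc^-1

def mond_accel_simple(gN):
    """Invert the 'simple' interpolation function mu(x) = x/(1+x):
    solve mu(g/a0) * g = gN for the true acceleration g.
    With y = gN/a0 this is the quadratic x^2 - y*x - y = 0."""
    y = gN / a0
    x = 0.5 * (y + np.sqrt(y * y + 4.0 * y))
    return a0 * x

# point-mass toy galaxy (the Tuorla-Heidelberg model itself is far richer)
M = 5e10                          # Msun
r = np.linspace(2.0, 30.0, 50)    # kpc
gN = G * M / r**2                 # Newtonian acceleration
v = np.sqrt(mond_accel_simple(gN) * r)   # circular speed, km/s
```

In the deep-MOND limit gN ≪ a0 this reproduces the flat asymptote v⁴ → G M a0, the behavior that ties the predicted curve to the baryonic mass with no fit parameters.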
Comprehensive Understanding for Vegetated Scene Radiance Relationships
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Deering, D. W.
1984-01-01
Directional reflectance distributions spanning the entire existent hemisphere were measured in two field studies; one using a Mark III 3-band radiometer and one using the rapid scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Analysis of field data showed unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends were proposed. A 3-D model was developed and is unique in that it predicts: (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy; (2) the spectral absorption as a function of location within the scene; and (3) the directional spectral radiance as a function of the sensor's location within the scene. Initial verification of the model as applied to a soybean row crop showed that the simulated directional data corresponded relatively well in gross trends to the measured data. The model was expanded to include the anisotropic scattering properties of leaves as a function of the leaf orientation distribution in both the zenith and azimuth angle modes.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
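The convolution pipeline can be sketched numerically. The exponential gain distribution below is a crude stand-in for the weighted sum of McIntyre distributions (an assumption, not the paper's model), and all parameter values are illustrative:

```python
import numpy as np

def count_probability(thresholds, mean_gain=20.0, sigma_tia=3.0,
                      n=200000, seed=0):
    """Monte Carlo sketch of the false-count CDF: peak heights drawn from
    an exponential stand-in for the APD's multiplied-dark-current
    distribution, summed with independent Gaussian TIA noise (the
    convolution of the two densities). Returns P(peak > threshold)
    for each detection threshold."""
    rng = np.random.default_rng(seed)
    peaks = rng.exponential(mean_gain, n) + rng.normal(0.0, sigma_tia, n)
    return np.array([(peaks > t).mean() for t in thresholds])

thr = np.linspace(0.0, 100.0, 21)   # detection thresholds, arbitrary units
p = count_probability(thr)
```

Raising the detection threshold walks down this curve, which is how the fitted CDF model predicts the false-count improvement available from lower TIA noise or dark current.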
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying the Bayes rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to traditional SEM.
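Gibbs sampling alternates draws from each parameter's full conditional distribution. The sketch below shows the mechanics on a deliberately simple conjugate normal model rather than the paper's SEM; the priors and data are illustrative:

```python
import numpy as np

def gibbs_normal(y, mu0=0.0, tau0=1e-2, a0=1.0, b0=1.0, iters=5000, seed=0):
    """Gibbs sampler for the mean and precision of a normal model with a
    normal prior on the mean and a gamma prior on the precision (a
    stand-in for the paper's SEM parameters: same mechanics, simpler
    model). Returns post-burn-in draws of (mu, precision)."""
    rng = np.random.default_rng(seed)
    n, ybar = len(y), np.mean(y)
    mu, prec = ybar, 1.0
    mus, precs = [], []
    for _ in range(iters):
        # full conditional for mu given the precision: normal
        v = 1.0 / (tau0 + n * prec)
        m = v * (tau0 * mu0 + prec * n * ybar)
        mu = rng.normal(m, np.sqrt(v))
        # full conditional for the precision given mu: gamma
        prec = rng.gamma(a0 + 0.5 * n,
                         1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
        mus.append(mu)
        precs.append(prec)
    return np.array(mus[iters // 2:]), np.array(precs[iters // 2:])

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, 200)       # synthetic data
mu_s, prec_s = gibbs_normal(y)
```

Swapping the weak prior (`tau0` small) for a tight expert-informed one shifts the posterior exactly as the two prior-elicitation approaches in the abstract would.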
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
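The off-axis statistic can be reproduced by direct Monte Carlo: a constant phasor plus circular complex Gaussian speckle yields modified Rician intensities. A hedged sketch with illustrative coherent and speckle intensities Ic and Is:

```python
import numpy as np

def modified_rician_samples(Ic, Is, n, rng):
    """Intensity of a constant phasor (coherent part, mean intensity Ic)
    plus circular complex Gaussian speckle (mean intensity Is):
    I = |sqrt(Ic) + A|^2 follows the modified Rician distribution,
    with mean Ic + Is and positive skewness."""
    A = (rng.normal(0.0, np.sqrt(Is / 2.0), n)
         + 1j * rng.normal(0.0, np.sqrt(Is / 2.0), n))
    return np.abs(np.sqrt(Ic) + A) ** 2

rng = np.random.default_rng(0)
I = modified_rician_samples(Ic=4.0, Is=1.0, n=500000, rng=rng)
```

Histogramming such samples against a gamma fit of the on-axis core is one way to visualize the morphological difference the paper exploits to discriminate real sources from quasi-static speckles.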
NASA Astrophysics Data System (ADS)
Zhao, Changhao; Hou, Dong; Chung, Ching-Chang; Yu, Yingying; Liu, Wenfeng; Li, Shengtao; Jones, Jacob L.
2017-11-01
The local structural behavior of PbZr0.5Ti0.5O3 (PZT 50/50) ceramics during application of an electric field was investigated using pair distribution function (PDF) analysis. In situ synchrotron total scattering was conducted, and the PDFs were calculated from the Fourier transform of the total scattering data. The PDF refinement of the zero-field data suggests a local-structure model with [001] Ti-displacement and negligible Zr-displacement. The directional PDFs at different field amplitudes indicate the bond-length distribution of the nearest Pb-B (B = Zr/Ti) pair changes significantly with the field. The radial distribution functions (RDFs) of a model for polarization rotation were calculated. The calculated and the experimental RDFs are consistent. This result suggests the changes in Pb-B bond-length distribution could be dominantly caused by polarization rotation. Peak fitting of the experimental RDFs was also conducted. The peak position trends with increasing field are mostly in agreement with the calculation result of the polarization rotation model. The area ratio of the peaks in the experimental RDFs also changed with field amplitude, indicating that Zr atoms have a detectable displacement driven by the electric field. Our study provides an experimental observation of the behaviors of PZT 50/50 under field at a local scale which supports the polarization rotation mechanism.
A portal for the ocean biogeographic information system
Zhang, Yunqing; Grassle, J. F.
2002-01-01
Since its inception in 1999 the Ocean Biogeographic Information System (OBIS) has developed into an international science program as well as a globally distributed network of biogeographic databases. An OBIS portal at Rutgers University provides the links and functional interoperability among member database systems. Protocols and standards have been established to support effective communication between the portal and these functional units. The portal provides distributed data searching, a taxonomy name service, a GIS with access to relevant environmental data, biological modeling, and education modules for mariners, students, environmental managers, and scientists. The portal will integrate Census of Marine Life field projects, national data archives, and other functional modules, and provides for network-wide analyses and modeling tools.
A size-structured model of bacterial growth and reproduction.
Ellermeyer, S F; Pilyugin, S S
2012-01-01
We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces higher redox potential than the hydroxyl group. Second, more carbonyl groups would result in higher redox potential. Lastly, the locally concentrated distribution of the carbonyl group is more beneficial to have higher redox potential compared to the uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov () and McGarr () linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
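The Monte Carlo step, sampling a synthetic catalog from the Gutenberg-Richter law and summing moments, can be sketched as follows; the truncation bounds, b-value, and the Hanks-Kanamori moment scaling are illustrative assumptions, not values from the Groningen study:

```python
import numpy as np

def simulate_catalog_moment(n_events, b=1.0, m_min=1.5, m_max=5.0, seed=0):
    """Sample magnitudes from a doubly truncated Gutenberg-Richter law
    (exponential in magnitude, via inverse-CDF sampling) and return them
    with the summed seismic moment in N*m. The moments themselves then
    follow a truncated Pareto distribution with tail index 2b/3."""
    rng = np.random.default_rng(seed)
    beta = b * np.log(10.0)
    u = rng.random(n_events)
    m = -np.log(np.exp(-beta * m_min)
                - u * (np.exp(-beta * m_min) - np.exp(-beta * m_max))) / beta
    moments = 10.0 ** (1.5 * m + 9.1)   # Hanks-Kanamori scaling, N*m
    return m, moments.sum()

m, total = simulate_catalog_moment(10000)
```

Repeating this draw many times yields the distribution of total seismic moment whose confidence bounds the paper instead obtains from the Pareto Sum Distribution approximations.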
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, takes into account neither the interaction between turbulence and chemical kinetics nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
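The presumed β-PDF closure amounts to moment-matching a beta distribution to the mean and variance of Z and integrating the flamelet quantity against it. A minimal sketch with an illustrative piecewise-linear flame-sheet profile (not a real flamelet table):

```python
import math
import numpy as np

def presumed_beta_average(phi, z_mean, z_var, n=20001):
    """Average a thermochemical quantity phi(Z) over a presumed beta PDF
    whose parameters are moment-matched to the mean and variance of the
    mixture fraction Z: a + b = z_mean*(1 - z_mean)/z_var - 1."""
    s = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * s, (1.0 - z_mean) * s
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    pdf = z ** (a - 1.0) * (1.0 - z) ** (b - 1.0) / B
    dz = z[1] - z[0]
    return float(np.sum(phi(z) * pdf) * dz)   # simple quadrature

# illustrative bilinear temperature profile peaking at stoichiometric Zst
Zst = 0.3
T = lambda z: np.where(z < Zst, z / Zst, (1.0 - z) / (1.0 - Zst))
Tbar = presumed_beta_average(T, z_mean=0.3, z_var=0.02)
```

The Dirac choice for C skips this integration entirely in the C direction, which is exactly the turbulence-chemistry interaction the abstract says the standard model ignores.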
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
Evaluation model of distribution network development based on ANP and grey correlation analysis
NASA Astrophysics Data System (ADS)
Ma, Kaiqiang; Zhan, Zhihong; Zhou, Ming; Wu, Qiang; Yan, Jun; Chen, Genyong
2018-06-01
The existing distribution network evaluation system cannot scientifically and comprehensively reflect the distribution network development status. Furthermore, the evaluation model is monotonous and is not suitable for horizontal analysis of many regional power grids. For these reasons, this paper constructs a universally adaptable evaluation index system and model of distribution network development. Firstly, the distribution network evaluation system is set up from power supply capability, power grid structure, technical equipment, intelligence level, power grid efficiency, and power grid development benefit. Then the comprehensive weight of the indices is calculated by combining the AHP with grey correlation analysis. Finally, the index scoring function is obtained by fitting the index evaluation criteria to curves, and the sample evaluation result is then obtained using a multiply-and-add operator. The example analysis shows that the model can reflect the development of the distribution network and identify the advantages and disadvantages of distribution network development. Besides, the model provides suggestions for the development and construction of the distribution network.
Exact solution for the time evolution of network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.; Plato, A. D. K.
2007-05-01
We consider the rewiring of a bipartite graph using a mixture of random and preferential attachment. The full mean-field equations for the degree distribution and its generating function are given. The exact solution of these equations for all finite parameter values at any time is found in terms of standard functions. It is demonstrated that these solutions are an excellent fit to numerical simulations of the model. We discuss the relationship between our model and several others in the literature, including examples of urn, backgammon, and balls-in-boxes models, the Watts and Strogatz rewiring problem, and some models of zero range processes. Our model is also equivalent to those used in various applications including cultural transmission, family name and gene frequencies, glasses, and wealth distributions. Finally some Voter models and an example of a minority game also show features described by our model.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
Unraveling hadron structure with generalized parton distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrei Belitsky; Anatoly Radyushkin
2004-10-01
The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
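The pdf-as-UH idea reduces to a two-parameter nonlinear least-squares fit. The sketch below uses a brute-force grid search in place of the paper's Mathematica and genetic-algorithm optimizers, on a synthetic gamma-shaped UH:

```python
import math
import numpy as np

def gamma_pdf(t, k, theta):
    """Two-parameter gamma density used as a unit-hydrograph shape."""
    return t ** (k - 1.0) * np.exp(-t / theta) / (math.gamma(k) * theta ** k)

def fit_gamma_uh(t, u, k_grid, theta_grid):
    """Least-squares fit of a gamma pdf to unit-hydrograph ordinates by
    exhaustive grid search (a stand-in for the paper's nonlinear
    least-squares and genetic-algorithm optimizers)."""
    best = None
    for k in k_grid:
        for th in theta_grid:
            sse = np.sum((u - gamma_pdf(t, k, th)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, k, th)
    return best[1], best[2]

t = np.linspace(0.5, 24.0, 48)        # hours
u_obs = gamma_pdf(t, 3.0, 2.0)        # synthetic "observed" UH ordinates
k, th = fit_gamma_uh(t, u_obs,
                     np.arange(1.0, 6.01, 0.25),
                     np.arange(0.5, 5.01, 0.25))
```

Fitting a pdf rather than free ordinates guarantees a smooth, unit-area, single-peaked UH, which is what makes the gamma and Pearson shapes good at preserving the rising and recession limbs.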
Design of Magnetic Charged Particle Lens Using Analytical Potential Formula
NASA Astrophysics Data System (ADS)
Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.
2018-05-01
The aim of the current research was to exploit the potential of two cylindrical electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, Simulink models have been built to determine the distribution of the target function and its related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value, Bmax, and the half width, W, of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely the bore radius R.
A model for the microwave emissivity of the ocean's surface as a function of wind speed
NASA Technical Reports Server (NTRS)
Wilheit, T. T.
1979-01-01
A quantitative model is presented which describes the ocean surface as an ensemble of flat facets with a normal distribution of slopes. The variance of the slope distribution is linearly related to frequency up to 35 GHz and constant at higher frequencies. These facets are partially covered with an absorbing nonpolarized foam layer. Experimental evidence is presented for this model.
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reduce the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models are developing ensemble forecasts for various temporal ranges. It is proven that raw products from NWP models are biased in mean and spread. Given the above, there is a need for methods that are able to generate reliable ensemble forecasts for hydrological applications. One of the common techniques is to apply statistical procedures in order to generate an ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is fitting a Gaussian distribution to the marginal distributions of the observed and modeled climate variable. Here, we have described and evaluated a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are known as the multivariate joint distribution of univariate marginal distributions, and are presented as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA using the monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5 × 0.5 degree spatial resolution to reproduce the observations.
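A Gaussian-copula version of the conditional-distribution step can be sketched as follows: transform both series to normal scores, condition the bivariate normal on the new forecast's score, and back-transform the samples with empirical quantiles. All data and names here are synthetic illustrations, not the CFS procedure itself:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def normal_scores(x):
    """Empirical-CDF transform to standard normal (the copula transform)."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    u = ranks / (len(x) + 1.0)
    return np.array([nd.inv_cdf(v) for v in u])

def copula_ensemble(f_hist, o_hist, f_new, n_members=50, seed=0):
    """Gaussian-copula sketch of the conditional distribution of the
    observation given a single-value forecast: correlate the normal
    scores, condition the bivariate normal, then map samples back to
    observation space via empirical quantiles."""
    rng = np.random.default_rng(seed)
    zf, zo = normal_scores(f_hist), normal_scores(o_hist)
    rho = np.corrcoef(zf, zo)[0, 1]
    # normal score of the new forecast via the empirical CDF of f_hist
    u = (np.sum(f_hist <= f_new) + 0.5) / (len(f_hist) + 1.0)
    z = nd.inv_cdf(min(max(u, 1e-3), 1.0 - 1e-3))
    zs = rng.normal(rho * z, np.sqrt(1.0 - rho ** 2), n_members)
    return np.quantile(o_hist, [nd.cdf(v) for v in zs])

rng = np.random.default_rng(1)
f_hist = rng.gamma(2.0, 30.0, 300)                 # mm/month, synthetic
o_hist = 0.8 * f_hist + rng.gamma(2.0, 8.0, 300)
ens = copula_ensemble(f_hist, o_hist, f_new=80.0)
```

Because the marginals are handled empirically, the method avoids the Gaussian-marginal assumption the abstract criticizes while keeping a simple parametric dependence structure.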
The verification is conducted on a different period and the superiority of the procedure is compared with Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in USA.
Made-to-measure modelling of observed galaxy dynamics
NASA Astrophysics Data System (ADS)
Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.
2018-01-01
Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is amongst the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
NASA Astrophysics Data System (ADS)
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a limited number of experimental data and expert judgments, we have divided reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. Consequently, to illustrate this modification, we have taken information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and completed the reliability estimation for the opening function of a cabin door affected by the imprecise judgment corresponding to the distribution hypothesis.
NASA Astrophysics Data System (ADS)
Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine
2017-06-01
Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or not modified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the bidirectional reflectance distribution function (BRDF) exhibited at the skin surface.
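The Henyey-Greenstein phase function used above has a closed-form inverse CDF, which is what makes it convenient in Monte Carlo light transport. A minimal sampling sketch (the anisotropy value g = 0.9 and the sample count are illustrative assumptions, not parameters from the study):

```python
import numpy as np

def sample_hg_cos_theta(g, rng, n):
    """Draw n scattering-angle cosines from the Henyey-Greenstein phase
    function via its analytic inverse CDF."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = np.random.default_rng(0)
mu = sample_hg_cos_theta(0.9, rng, 200_000)
# The mean scattering cosine of HG equals the anisotropy factor g.
print(mu.mean())  # close to 0.9
```

The sampled cosine is then combined with a uniform azimuth to rotate the photon direction at each scattering event.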
Passage relevance models for genomics search.
Urbain, Jay; Frieder, Ophir; Goharian, Nazli
2009-03-19
We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
NASA Astrophysics Data System (ADS)
Waggoner, William Tracy
1990-01-01
Experimental capture cross sections dσ/dθ versus θ are presented for various ions incident on neutral targets. First, distributions are presented for Ar^8+ ions incident on H2, D2, and Ar targets. Energy gain studies indicate that capture occurs primarily to a 5d,f final state of Ar^7+ with some contributions from transfer ionization (T.I.) channels. Angular distribution spectra for all three targets are similar, with a main peak located at forward angles which is attributed to single capture events, and a secondary structure occurring at large angles which is attributed to T.I. contributions. A series of Ar^8+ on Ar spectra were collected using a retarding grid system as a low-resolution energy spectrometer to resolve single capture events from T.I. events. The resulting single capture and T.I. angular distributions are presented. Results are discussed in terms of a classical deflection function employing a simple two-state curve crossing model. Angular distributions for electron capture from He by C, N, O, F, and Ne ions with charge states from 5+ to 8+ are presented for projectile energies between 1.2 and 2.0 kV. Distributions for the same charge state but different ion species are similar but not identical: distributions for the 5+ and 7+ ions are strongly forward peaked, the 6+ distributions are much less forward peaked with the O^6+ distributions showing structure, and the Ne^8+ ion distribution appears to be an intermediate case between forward peaking and large-angle scattering. These results are discussed in terms of classical deflection functions which utilize two-state Coulomb diabatic curve crossing models. Finally, angular distributions are presented for electron capture from He by Ar^6+ ions at energies between 1287 eV and 296 eV. At large projectile energies the distribution is broad. As the energy decreases below 523 eV, distributions shift to forward angles with a second peak appearing outside the Coulomb angle, θ_c = Q/2E, which continues to grow in magnitude as the projectile energy decreases further. Results are compared with a model calculation employing a two-state diabatic Coulomb curve crossing model and the classical deflection function.
NASA Astrophysics Data System (ADS)
Leon, R.; Somoza, L.
2009-04-01
This communication presents a computational model, implemented in a Geographical Information System (GIS) environment, for mapping the regional 3D distribution in which seafloor gas hydrates would be stable. The construction of the model comprises three primary steps: (1) the construction of surfaces for the various variables based on available 3D data (seafloor temperature, geothermal gradient and depth-pressure); (2) the calculation of the gas equilibrium functions for the various hydrocarbon compositions reported from hydrate and sediment samples; and (3) the calculation of the thickness of the hydrate stability zone. The solution is based on a transcendental function, which is solved iteratively in the GIS environment. The model has been applied to the northernmost continental slope of the Gulf of Cadiz, an area where an abundant supply for hydrate formation, such as extensive hydrocarbon seeps, diapirs and fault structures, is combined with deep undercurrents and a complex seafloor morphology. In the Gulf of Cadiz, the model depicts the distribution of the base of the gas hydrate stability zone for both biogenic and thermogenic gas compositions, and explains the geometry and distribution of geological structures derived from gas venting in the Tasyo Field and the generation of BSR levels on the upper continental slope.
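The iterative solution of the transcendental stability equation can be sketched as a bisection on depth below the seafloor. Everything numerical here is an assumption for illustration: the simplified methane-hydrate equilibrium curve ln P_eq = A − B/T, the hydrostatic gradient, and the seafloor conditions are rough stand-ins, not values from the Gulf of Cadiz model.

```python
import math

# Assumed, illustrative constants: a crude methane-hydrate equilibrium
# curve ln P_eq[MPa] = A - B / T[K], and a seawater hydrostatic gradient.
A, B = 27.55, 7274.0
RHO_G = 9.81 * 1030 * 1e-6  # MPa per metre of seawater column

def stability_excess(z, seafloor_depth, t_seafloor, grad):
    """P(z) - P_eq(T(z)) at depth z below the seafloor; positive where
    hydrate is stable (linear geotherm assumed)."""
    t = t_seafloor + grad * z
    p = RHO_G * (seafloor_depth + z)
    return p - math.exp(A - B / t)

def base_of_stability(seafloor_depth, t_seafloor, grad, zmax=2000.0, tol=1e-3):
    """Bisection on the transcendental equation P(z) = P_eq(T(z))."""
    lo, hi = 0.0, zmax
    if stability_excess(lo, seafloor_depth, t_seafloor, grad) < 0:
        return 0.0  # no stability zone even at the seafloor
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stability_excess(mid, seafloor_depth, t_seafloor, grad) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z_base = base_of_stability(seafloor_depth=1200.0, t_seafloor=277.0, grad=0.03)
print(f"base of hydrate stability: {z_base:.0f} m below seafloor")
```

In a GIS implementation this solve is repeated per grid cell, with the seafloor temperature, water depth and geothermal gradient read from the interpolated surfaces.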
NASA Technical Reports Server (NTRS)
Harper, Richard
1989-01-01
In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.
NASA Astrophysics Data System (ADS)
Kim, Y. W.; Cress, R. P.
2016-11-01
Disordered binary alloys are modeled as a randomly close-packed assembly of nanocrystallites intermixed with randomly positioned atoms, i.e., glassy-state matter. The nanocrystallite size distribution is measured in a simulated macroscopic medium in two dimensions. We have also defined, and measured, the degree of crystallinity as the probability of a particle being a member of nanocrystallites. Both the distribution function and the degree of crystallinity are found to be determined by alloy composition. When heated, the nanocrystallites become smaller in size due to increasing thermal fluctuation. We have modeled this phenomenon as a case of thermal dissociation by means of the law of mass action. The crystallite size distribution function is computed for AuCu3 as a function of temperature by solving some 12 000 coupled algebraic equations for the alloy. The results show that linear thermal expansion of the specimen has contributions from the temperature dependence of the degree of crystallinity, in addition to respective thermal expansions of the nanocrystallites and glassy-state matter.
Using special functions to model the propagation of airborne diseases
NASA Astrophysics Data System (ADS)
Bolaños, Daniela
2014-06-01
Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens to the environment, which infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function and is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results we obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of spread of tuberculosis.
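A minimal sketch of the two special functions at work. The parameter values and the erf ramp-up form are assumptions for illustration, not the paper's equations; the K0 radial profile is the standard steady-state solution of two-dimensional diffusion with first-order removal.

```python
import numpy as np
from scipy.special import erf, k0

# Assumed parameters: emission rate, room volume, removal (ventilation) rate.
q, V, lam = 10.0, 50.0, 0.5

def concentration(t):
    """Pathogen concentration rising toward the steady state q/(V*lam);
    the erf time dependence is an illustrative ramp, not the paper's exact law."""
    return (q / (V * lam)) * erf(np.sqrt(lam * t))

def radial_profile(r, D=1.0):
    """Steady-state spatial decay around a point source for 2-D diffusion
    with removal rate lam: proportional to K0(sqrt(lam/D) * r)."""
    return k0(np.sqrt(lam / D) * r)
```

The concentration saturates at q/(Vλ), while K0 decays monotonically away from the source, which is the qualitative behaviour the abstract describes.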
Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.
2017-01-01
Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. 
However, the similarity of environmental niches among warmwater centrarchids, ictalurids, fundulids, and poeciliids in the UCRB indicated that dam removals could influence the distribution of these nonnatives simultaneously, thus providing greater conservation benefits. By contrast, this same management strategy would have more limited effects on nonnative salmonids, catostomids, and percids with colder temperature preferences, thus necessitating other management strategies to control these species.
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(−κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion bκ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.
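The model is straightforward to simulate. The sketch below builds small 2-D lattices with bond conductivities σ = exp(−κr), computes the two-point effective resistance from the pseudo-inverse of the weighted graph Laplacian, and collects log ρ over disorder realizations; the lattice size, κ value, and corner terminals are illustrative assumptions.

```python
import numpy as np

def lattice_resistance(L, kappa, rng):
    """Effective resistance across an L x L square lattice whose bond
    conductivities follow the hopping form sigma = exp(-kappa * r),
    with r uniform in [0, 1]. Uses the Laplacian pseudo-inverse identity
    R_st = G+[s,s] + G+[t,t] - 2 G+[s,t]."""
    n = L * L
    G = np.zeros((n, n))  # weighted graph Laplacian
    for i in range(L):
        for j in range(L):
            a = i * L + j
            for b in ((i + 1) * L + j if i + 1 < L else None,
                      i * L + j + 1 if j + 1 < L else None):
                if b is not None:
                    s = np.exp(-kappa * rng.random())
                    G[a, a] += s; G[b, b] += s
                    G[a, b] -= s; G[b, a] -= s
    Gi = np.linalg.pinv(G)
    s, t = 0, n - 1  # opposite corners as terminals
    return Gi[s, s] + Gi[t, t] - 2.0 * Gi[s, t]

rng = np.random.default_rng(1)
log_rho = np.log([lattice_resistance(8, kappa=5.0, rng=rng) for _ in range(200)])
print(log_rho.mean(), log_rho.std())  # parameters of a near-log-normal fit
```

Repeating this for several (L, κ) pairs is enough to test the claimed scaling of the dispersion with κ^ν/L.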
Virtual gonio-spectrophotometer for validation of BRDF designs
NASA Astrophysics Data System (ADS)
Mihálik, Andrej; Ďurikovič, Roman
2011-10-01
Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light measured in terms of reflectance and transmittance, and the spatial energy distribution of that light measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of illumination and observation. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens that can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
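One check a virtual gonio-spectrophotometer enables is energy conservation: the directional-hemispherical reflectance ∫ f_r cos θ dω must not exceed 1. A numerical sketch with a simple Lambertian-plus-Phong-lobe BRDF standing in for the reflectance model (the BRDF form and its parameters are assumptions, not the article's model):

```python
import numpy as np

def phong_brdf(cos_r, kd, ks, n):
    """Lambertian term plus a normalized Phong lobe around the mirror
    direction (illustrative model, not the article's)."""
    return kd / np.pi + ks * (n + 2) / (2 * np.pi) * np.clip(cos_r, 0, 1) ** n

def albedo(brdf, theta_i, n_theta=400, n_phi=400):
    """Directional-hemispherical reflectance: midpoint-rule quadrature of
    f_r * cos(theta) over the outgoing hemisphere."""
    theta = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
    phi = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    mirror = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])
    wo = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
    cos_r = np.einsum("i,i...->...", mirror, wo)  # angle to mirror direction
    dw = np.sin(T) * (np.pi / 2 / n_theta) * (2 * np.pi / n_phi)
    return float(np.sum(brdf(cos_r) * np.cos(T) * dw))

a = albedo(lambda c: phong_brdf(c, kd=0.5, ks=0.3, n=20), theta_i=np.radians(30))
print(a)  # must not exceed 1 for an energy-conserving model
```

Sweeping theta_i over the incident hemisphere gives the full conservation test the article applies to its computer reflectance models.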
Competition between pressure and gravity confinement in Lyman Alpha forest observations
NASA Technical Reports Server (NTRS)
Charlton, Jane C.; Salpeter, Edwin E.; Linder, Suzanne M.
1994-01-01
A break in the distribution function of Lyman Alpha clouds (at a typical redshift of 2.5) has been reported by Petitjean et al. (1993). This feature is what would be expected from a transition between pressure confinement and gravity confinement (as predicted in Charlton, Salpeter & Hogan 1993). The column density at which the feature occurs has been used to determine the external confining pressure P/k of approximately 10 cm⁻³ K, which could be due to a hot intergalactic medium. For models that provide a good fit to the data, the contribution of the gas in clouds to Ω is small. The specific shape of the distribution function at the transition (predicted by models to have a nonmonotonic slope) can serve as a diagnostic of the distribution of dark matter around Lyman Alpha forest clouds, and the present data already eliminate certain models.
Li, Xiaolu; Liang, Yu; Xu, Lijun
2014-09-01
To provide a credible model for light detection and ranging (LiDAR) target classification, the focus of this study is the relationship between LiDAR intensity data and the bidirectional reflectance distribution function (BRDF). An integration method based on a coaxial laser detection system built in the laboratory was developed. An intermediate BRDF model proposed by Schlick was introduced into the integration method, considering the diffuse and specular backscattering characteristics of the surface. A group of measurement campaigns was carried out to investigate the influence of the incident angle and detection range on the measured intensity data. Two extracted parameters, r and S(λ), are influenced by different surface features and describe the distribution and magnitude of the reflected energy, respectively. The combination of the two parameters can be used to describe surface characteristics for target classification in a more plausible way.
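Schlick's best-known contribution in this context is an inexpensive polynomial approximation to Fresnel reflectance, which underlies many diffuse-plus-specular BRDF mixes. A sketch follows; the mixing parameter r below is a hypothetical stand-in, not necessarily the authors' r.

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation to Fresnel reflectance:
    F = F0 + (1 - F0) * (1 - cos_theta)^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def reflectance(cos_theta, r, f0=0.04):
    """Illustrative diffuse/specular mix weighted by a parameter r in [0, 1]
    (hypothetical; not the authors' exact definition of r)."""
    return r + (1.0 - r) * schlick_fresnel(cos_theta, f0)
```

The approximation reproduces the key qualitative behaviour exploited in intensity modelling: reflectance equals the normal-incidence value F0 at cos θ = 1 and rises to 1 at grazing incidence.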
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
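Drawing an initial model with fractal statistics can be sketched by random-phase spectral synthesis: unit-amplitude random phases scaled by a power-law amplitude f^(−β/2), with β tied to the Hurst coefficient, then shifted and scaled to a target mean and variance. All numbers below (mean, standard deviation, H) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def powerlaw_samples(n, beta, rng):
    """Random-phase spectral synthesis of a zero-mean, unit-variance series
    whose power spectrum falls off as f^(-beta); beta = 2H + 1 links the
    spectral exponent to the Hurst coefficient H."""
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)          # zero out the DC term
    phase = rng.random(f.size) * 2.0 * np.pi  # random phases
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(2)
# Assumed model-space statistics: mean 2.0, std 0.1, Hurst H = 0.7 -> beta = 2.4
model = 2.0 + 0.1 * powerlaw_samples(4096, beta=2.4, rng=rng)
```

Each draw serves as one candidate initial model for the global optimization stage; repeating the draw explores the prior honestly rather than starting from a single smooth guess.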
Accurate mass and velocity functions of dark matter haloes
NASA Astrophysics Data System (ADS)
Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly
2017-08-01
N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙ with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc³, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, bias and halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to push for the development of halo occupation distribution using Vmax. The data and the analysis code are made publicly available in the Skies and Universes data base.
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.
1998-01-01
Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere as Coulomb collisions reduce the fluxes inside of this and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and their influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10^10 eV cm⁻² s⁻¹ in the plasmasphere, while heating from the PSEs typically does not exceed 10^8 eV cm⁻² s⁻¹. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.
Studies of transverse momentum dependent parton distributions and Bessel weighting
Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...
2015-03-01
In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy and Q². We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
Fault Detection of Rotating Machinery using the Spectral Distribution Function
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1997-01-01
The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
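The spectral distribution function is the normalized cumulative integral of the power spectrum, so it can be estimated with little more effort than the periodogram itself. A sketch on a synthetic signal (the tone frequency, sampling rate, and noise level are assumed for illustration):

```python
import numpy as np

def spectral_distribution(x, fs):
    """Spectral distribution function: the normalized cumulative sum of the
    periodogram, a monotone curve rising from 0 to 1 over frequency."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    sdf = np.cumsum(psd) / psd.sum()
    return freqs, sdf

fs = 1000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(t.size)
freqs, sdf = spectral_distribution(x, fs)
# a dominant 60 Hz tone concentrates nearly all cumulative power below ~70 Hz
```

A developing fault redistributes power across frequency, which shifts the shape of this cumulative curve even when individual spectral peaks are hard to track; that robustness is the point made in the abstract.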
On the velocity distribution of ion jets during substorm recovery
NASA Technical Reports Server (NTRS)
Birn, J.; Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Paschmann, G.
1981-01-01
The velocity distribution of earthward jetting ions that are observed principally during substorm recovery by satellites at approximately 15-35 earth radii in the magnetotail is quantitatively compared with two different theoretical models - the 'adiabatic deformation' of an initially flowing Maxwellian moving into higher magnetic field strength (model A) and the field-aligned electrostatic acceleration of an initially nonflowing isotropic Maxwellian including adiabatic deformation effects (model B). The assumption is made that the ions are protons or, more generally, that they consist of only one species. It is found that both models can explain the often observed concave-convex shape of isodensity contours of the distribution function.
Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine
NASA Astrophysics Data System (ADS)
Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.
2018-04-01
A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, particularly in reliability-centred preventive maintenance. The usual Weibull distribution is, however, not capable of modelling the complete lifecycle of any system with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
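The limitation noted above is easy to verify: the two-parameter Weibull hazard is monotone for any single choice of shape parameter, so one Weibull cannot span infant mortality, useful life, and wear-out at once. A brief sketch (parameter values are illustrative):

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Hazard (failure-rate) function of the two-parameter Weibull
    distribution: h(t) = (beta/eta) * (t/eta)^(beta-1). It is monotone:
    decreasing for beta < 1, constant for beta = 1, increasing for beta > 1,
    so a single Weibull cannot produce a full bathtub curve."""
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(0.1, 10.0, 100)
early = weibull_hazard(t, beta=0.5, eta=2.0)  # infant-mortality phase
wear = weibull_hazard(t, beta=3.0, eta=2.0)   # wear-out phase
# a piecewise or mixed model is needed to join these into a bathtub shape
```

Bathtub-capable alternatives piece together Weibull segments with different shape parameters or use extended (e.g. modified Weibull) distributions.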
Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; ...
2014-12-31
During CO 2 injection and storage in deep reservoirs, the injected CO 2 enters into an initially brine saturated porous medium, and after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO 2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes are needed for predicting fate and transport of injected CO 2, evaluating environmental risks and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore-scale, distribution of void sizes and connectivity of void space playmore » a major role for the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents development of an approach based on the connectivity of void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses void-size distribution and a measure of void space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize connectivity of fluids in void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with actual data sets obtained from column experiments found in the literature. 
A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm) packed with homogeneous and heterogeneous sands. Initial results show that the model can predict the spatial and temporal distribution of injected fluid during the experiments reasonably well. However, further analyses are needed to comprehensively test the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.
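The statement that entrapped saturation is a function of the historical maximum saturation is the behavior classical trapping models encode. As a point of reference (not the paper's connectivity-based functions), a sketch of Land's (1968) trapping relation, with a hypothetical Land coefficient:

```python
def land_trapped_saturation(s_gi, c=1.0):
    """Land (1968) relation: residual (trapped) non-wetting saturation as a function
    of the maximum non-wetting saturation s_gi reached during drainage.
    The Land coefficient c = 1.0 is a hypothetical illustrative value."""
    return s_gi / (1 + c * s_gi)

# Trapped saturation grows monotonically with the historical maximum saturation,
# the hysteretic dependence that the connectivity-based functions generalize.
vals = [land_trapped_saturation(s) for s in (0.2, 0.5, 0.8)]
```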
Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing
2018-02-01
Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first decomposed into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
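The time/space separation step can be sketched on synthetic data: PCA (via the SVD) extracts dominant spatial modes, and a scalar AR model is fitted per retained mode. This toy sketch omits the RBF residual network and genetic-algorithm tuning; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic spatio-temporal field: two spatial modes driven by AR(1) temporal dynamics.
x = np.linspace(0, 1, 50)                                      # spatial grid
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])   # (2, 50) spatial basis
T = 300
a = np.zeros((T, 2))
for t in range(1, T):
    a[t] = np.array([0.9, 0.7]) * a[t - 1] + rng.normal(0, 0.1, 2)
Y = a @ modes + rng.normal(0, 0.01, (T, 50))                   # snapshots (time x space)

# Step 1 (PCA): dominant spatial basis functions from the SVD of centered snapshots.
U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
coeffs = U[:, :2] * s[:2]                                      # temporal series per mode

# Step 2 (decoupled linear model): one scalar AR(1) least-squares fit per mode.
phis = []
for j in range(2):
    z = coeffs[:, j]
    phis.append(np.dot(z[1:], z[:-1]) / np.dot(z[:-1], z[:-1]))
# The recovered AR coefficients should be near the generating values 0.9 and 0.7.
```

In the paper's full strategy the reconstruction Y ≈ coeffs @ modes closes the loop back to the spatio-temporal field, with the RBF network absorbing what the linear part misses.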
Weak annihilation cusp inside the dark matter spike about a black hole.
Shapiro, Stuart L; Shelton, Jessie
2016-06-15
We reinvestigate the effect of annihilations on the distribution of collisionless dark matter (DM) in a spherical density spike around a massive black hole. We first construct a very simple, pedagogic, analytic model for an isotropic phase space distribution function that accounts for annihilation and reproduces the "weak cusp" found by Vasiliev for DM deep within the spike and away from its boundaries. The DM density in the cusp varies as r^(-1/2) for s-wave annihilation, where r is the distance from the central black hole, and is not a flat "plateau" profile. We then extend this model by incorporating a loss cone that accounts for the capture of DM particles by the hole. The loss cone is implemented by a boundary condition that removes capture orbits, resulting in an anisotropic distribution function. Finally, we evolve an initial spike distribution function by integrating the Boltzmann equation to show how the weak cusp grows and its density decreases with time. We treat two cases, one for s-wave and the other for p-wave DM annihilation, adopting parameters characteristic of the Milky Way nuclear core and typical WIMP models for DM. The cusp density profile for p-wave annihilation is weaker, varying approximately as r^(-0.34), but is still not a flat plateau.
NASA Astrophysics Data System (ADS)
Alves, S. G.; Martins, M. L.
2010-09-01
Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxis the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxis account for the simplest cluster size distributions of cellular aggregates observed in culture.
45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?
Code of Federal Regulations, 2013 CFR
2013-10-01
... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...
45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?
Code of Federal Regulations, 2012 CFR
2012-10-01
... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...
45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?
Code of Federal Regulations, 2014 CFR
2014-10-01
... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...
Two-component Jaffe models with a central black hole - I. The spherical case
NASA Astrophysics Data System (ADS)
Ciotti, Luca; Ziaee Lorzad, Azadeh
2018-02-01
Dynamical properties of spherically symmetric galaxy models, where both the stellar and total mass density distributions are described by the Jaffe (1983) profile (with different scale lengths and masses), are presented. The orbital structure of the stellar component is described by Osipkov-Merritt anisotropy, and a black hole (BH) is added at the centre of the galaxy; the dark matter halo is isotropic. First, the conditions required to have a nowhere negative and monotonically decreasing dark matter halo density profile are derived. We then show that the phase-space distribution function can be recovered by using the Lambert-Euler W function, while in the absence of the central BH only elementary functions appear in the integrand of the inversion formula. The minimum value of the anisotropy radius for consistency is derived in terms of the galaxy parameters. The Jeans equations for the stellar component are solved analytically, and the projected velocity dispersion at the centre and at large radii is also obtained analytically for generic values of the anisotropy radius. Finally, the relevant global quantities entering the Virial Theorem are computed analytically, and the fiducial anisotropy limit required to prevent the onset of Radial Orbit Instability is determined as a function of the galaxy parameters. The presented models, even though highly idealized, represent a substantial generalization of the models presented in Ciotti, and can be useful as a starting point for more advanced modelling of the dynamics and the mass distribution of elliptical galaxies.
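The Jaffe (1983) profile itself is simple enough to verify numerically: the density ρ(r) = M rj / (4π r² (r + rj)²) integrates to the enclosed mass M(<r) = M r/(r + rj), so the scale length rj is the half-mass radius. A small check (unit M and rj assumed):

```python
import numpy as np
from scipy.integrate import quad

def jaffe_density(r, M=1.0, rj=1.0):
    """Jaffe (1983) profile: rho(r) = M*rj / (4*pi * r**2 * (r + rj)**2)."""
    return M * rj / (4 * np.pi * r**2 * (r + rj) ** 2)

def jaffe_mass(r, M=1.0, rj=1.0):
    """Analytic enclosed mass: M(<r) = M * r / (r + rj)."""
    return M * r / (r + rj)

# Numerical integration of the density over spherical shells reproduces the
# analytic cumulative mass; r = rj encloses exactly half the total mass.
num, _ = quad(lambda s: 4 * np.pi * s**2 * jaffe_density(s), 0, 1.0)
```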
A Bayesian kriging approach for blending satellite and ground precipitation observations
Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.
2015-01-01
Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables—for this research we include elevation. Prior distributions are defined for all model parameters and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation, and thus are obtained prior to implementing the spatial kriging model. This functional framework is applied to model parameters obtained by sampling from the posterior distributions, and the residuals of the linear model are subject to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. 
This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow the comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov chain Monte Carlo take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable.
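The two ways of computing the continuous ranked probability score that the abstract mentions can be illustrated side by side: the closed form for a Gaussian forecast (Gneiting & Raftery 2007) and the sample-based estimator E|X - y| - 0.5 E|X - X'| used for simulation draws. A Python sketch (the scoringRules package itself is R):

```python
import math
import random

def crps_normal(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma^2) forecast against outcome y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def crps_sample(draws, y):
    """Sample-based CRPS: mean|X - y| - 0.5 * mean|X - X'|, via sorting (O(n log n))."""
    n = len(draws)
    xs = sorted(draws)
    t1 = sum(abs(x - y) for x in xs) / n
    t2 = 2.0 / n**2 * sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return t1 - 0.5 * t2

random.seed(1)
draws = [random.gauss(0.0, 1.0) for _ in range(4000)]
analytic = crps_normal(0.5, 0.0, 1.0)   # about 0.331
approx = crps_sample(draws, 0.5)        # simulation-based estimate, close to analytic
```

Lower CRPS is better; because the rule is proper, the forecaster's true predictive distribution minimizes the expected score.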
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
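One common way to build a discrete heavy-tailed count model of this kind is to discretize a continuous survival function: P(X = k) = S(k) - S(k+1) for k = 0, 1, 2, ... A sketch using the continuous generalized Pareto survival function (this is an assumed two-parameter discretization for illustration; the paper's parameterization may differ):

```python
def dgpd_sf(k, xi, sigma):
    """Survival function of the continuous GPD evaluated at integer k:
    S(k) = (1 + xi*k/sigma)**(-1/xi), xi > 0 (heavy-tailed case)."""
    return (1 + xi * k / sigma) ** (-1 / xi)

def dgpd_pmf(k, xi, sigma):
    """Discretized pmf: P(X = k) = S(k) - S(k+1), k = 0, 1, 2, ..."""
    return dgpd_sf(k, xi, sigma) - dgpd_sf(k + 1, xi, sigma)

# The pmf telescopes to 1 minus the truncated tail mass, and decreases in k,
# giving the heavy right tail suited to crash and fatality counts.
total = sum(dgpd_pmf(k, 0.5, 2.0) for k in range(10000))
tail = dgpd_sf(10000, 0.5, 2.0)
```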
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
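The Poisson distortion the abstract warns about can be reproduced with a toy simulation: lognormally distributed cellular activities observed through a Poisson counting process yield a Poisson-lognormal track distribution that is overdispersed relative to a pure Poisson law. All parameter values are hypothetical:

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Hypothetical lognormal activity per cell, observed as Poisson-distributed tracks.
activities = [random.lognormvariate(1.0, 0.5) for _ in range(20000)]
tracks = [poisson(a) for a in activities]
m = statistics.mean(tracks)
v = statistics.variance(tracks)
# Pure Poisson would give v close to m; lognormal mixing inflates the variance by
# Var(lambda), which is why fits must separate the counting noise from the
# underlying activity distribution.
```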
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
Time behavior of solar flare particles to 5 AU
NASA Technical Reports Server (NTRS)
Haffner, J. W.
1972-01-01
A simple model of solar flare radiation event particle transport is developed to permit the calculation of fluxes and related quantities as a function of distance from the sun (R). This model assumes the particles spiral around the solar magnetic field lines with a constant pitch angle. The particle angular distributions and onset plus arrival times as functions of energy at 1 AU agree with observations if the pitch angle distribution peaks near 90 deg. As a consequence, the time-dependence factor scales essentially as R^1.7 (R in AU), and the event flux as R^-2.
Adiabatic description of long range frequency sweeping
NASA Astrophysics Data System (ADS)
Breizman, Boris; Nyqvist, Robert; Lilley, Matthew
2012-10-01
A theoretical framework is developed to describe long range frequency sweeping events in the 1D electrostatic bump-on-tail model with fast particle sources and collisions. The model includes three collision operators (Krook, drag (dynamical friction) and velocity space diffusion), and allows for a general shape of the fast particle distribution function. The behavior of phase space holes and clumps is analyzed, and the effect of particle trapping due to separatrix expansion is discussed. With a fast particle distribution function whose slope decays above the resonant phase velocity, hooked frequency sweeping is found for holes in the presence of drag collisions alone.
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Mitchell, Travis; Leonardi, Christopher; Bolster, Diogo
2017-11-01
Based on phase-field theory, we introduce a robust lattice-Boltzmann equation for modeling immiscible multiphase flows at large density and viscosity contrasts. Our approach is built by modifying the method proposed by Zu and He [Phys. Rev. E 87, 043301 (2013), 10.1103/PhysRevE.87.043301] in such a way as to improve efficiency and numerical stability. In particular, we employ a different interface-tracking equation based on the so-called conservative phase-field model, a simplified equilibrium distribution that decouples pressure and velocity calculations, and a local scheme based on the hydrodynamic distribution functions for calculation of the stress tensor. In addition to two distribution functions for interface tracking and recovery of hydrodynamic properties, the only nonlocal variable in the proposed model is the phase field. Moreover, within our framework there is no need to use biased or mixed difference stencils for numerical stability and accuracy at high density ratios. This not only simplifies the implementation and efficiency of the model, but also leads to a model that is better suited to parallel implementation on distributed-memory machines. Several benchmark cases are considered to assess the efficacy of the proposed model, including the layered Poiseuille flow in a rectangular channel, Rayleigh-Taylor instability, and the rise of a Taylor bubble in a duct. The numerical results are in good agreement with available numerical and experimental data.
A drug procurement, storage and distribution model in public hospitals in a developing country.
Kjos, Andrea L; Binh, Nguyen Thanh; Robertson, Caitlin; Rovers, John
2016-01-01
There is growing interest in pharmaceutical supply chains and distribution of medications at national and international levels. Issues of access and efficiency have been called into question. However, evaluations of system outcomes are not possible unless there are contextual data to describe the systems in question. Available guidelines provided by international advisory bodies such as the World Health Organization and the International Pharmacy Federation may be useful for developing countries like Vietnam when seeking to describe the pharmaceutical system. The purpose of this study was to describe a conceptual model for drug procurement, storage, and distribution in four government-owned hospitals in Vietnam. This study was qualitative and used semi-structured interviews with key informants from within the Vietnamese pharmaceutical system. Translated transcriptions were used to conduct a content analysis of the data. A conceptual model for the Vietnamese pharmaceutical system was described using structural and functional components. This model showed that in Vietnam, governmental policy influences the structural framework of the system, but allows for flexibility at the functional level of practice. Further, this model can be strongly differentiated from the models described by international advisory bodies. This study demonstrates a method for health care systems to describe their own models of drug distribution to address quality assurance, systems design and benchmarking for quality improvement. Copyright © 2016 Elsevier Inc. All rights reserved.
Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins
Nguyen, Bao Linh; Pettitt, B. Montgomery
2015-01-01
The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations. PMID:26388706
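The defining feature of the pRDF, binning each solvent molecule by its distance to the nearest protein atom rather than to every atom, can be sketched with toy coordinates. Shell volumes are estimated here by Monte Carlo with uniform reference points (all geometries are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
protein = rng.uniform(2.0, 8.0, size=(30, 3))      # toy "protein" atom coordinates
solvent = rng.uniform(0.0, 10.0, size=(20000, 3))  # toy solvent positions in a 10^3 box

def nearest_dist(points, atoms):
    """Distance from each point to its closest protein atom (the proximal distance)."""
    d = np.linalg.norm(points[:, None, :] - atoms[None, :, :], axis=-1)
    return d.min(axis=1)

bins = np.arange(0.0, 5.0, 0.25)
counts, _ = np.histogram(nearest_dist(solvent, protein), bins=bins)

# Proximal-shell volumes estimated by Monte Carlo: uniform reference points binned
# by the same nearest-atom distance.
ref = rng.uniform(0.0, 10.0, size=(20000, 3))
vol_counts, _ = np.histogram(nearest_dist(ref, protein), bins=bins)

prdf = counts / np.maximum(vol_counts, 1)  # pRDF up to a constant bulk-density factor
# For this uniform (bulk-like) toy solvent the pRDF is flat near 1; structured water
# around a real protein would instead show a first hydration-shell peak.
bulk = prdf[vol_counts > 200].mean()
```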
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio-based model and add an entropy objective function to generate a well-diversified asset portfolio within optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
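For a triangular fuzzy number with core a and left/right widths α, β, the possibilistic mean and variance of Carlsson and Fullér have the closed forms a + (β - α)/6 and (α + β)²/24, obtained by integrating over the γ-level sets [a - (1-γ)α, a + (1-γ)β]. A numerical check of those formulas (values illustrative):

```python
import numpy as np

def possibilistic_mean_var(a, alpha, beta, n=100000):
    """Carlsson-Fuller possibilistic mean and variance of a triangular fuzzy number,
    computed by midpoint-rule integration over the gamma-level sets."""
    g = (np.arange(n) + 0.5) / n          # gamma in (0, 1)
    lo = a - (1 - g) * alpha              # left endpoint of the gamma-cut
    hi = a + (1 - g) * beta               # right endpoint of the gamma-cut
    mean = np.sum(g * (lo + hi)) / n      # integral of gamma*(lo+hi) d gamma
    var = 0.5 * np.sum(g * (hi - lo) ** 2) / n
    return mean, var

pm, pv = possibilistic_mean_var(10.0, 2.0, 4.0)
# Closed forms: mean = 10 + (4-2)/6, variance = (2+4)**2 / 24 = 1.5
```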
NASA Astrophysics Data System (ADS)
Pedretti, Daniele
2017-04-01
Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation aims to discuss the results from a recent analysis where a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media were generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL or lognormal distributions have the same likelihood to represent parametrically the shape of the tails.
It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.
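The ambiguity can be reproduced in a few lines: draw tail data from a lognormal distribution and estimate a power-law exponent anyway, using the continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(x/xmin)) of Clauset, Shalizi and Newman (2009). The fitted slope lands in the "typical" 1.2-4 range even though no power law generated the data (parameter values are illustrative):

```python
import math
import random

random.seed(2)
xmin = 1.0
# Data drawn from a lognormal law, truncated to the tail x >= xmin.
data = [x for x in (random.lognormvariate(1.0, 1.2) for _ in range(5000)) if x >= xmin]

# Continuous power-law MLE for the tail exponent.
alpha = 1 + len(data) / sum(math.log(x / xmin) for x in data)
# alpha comes out near 1.7: a plausible-looking "power-law slope" from lognormal data,
# which is exactly the pitfall the study documents.
```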
An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter
2011-03-01
radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and...a population [7]. It has been used in the modelling of actuarial data; an example is in excess-of-loss quotations in insurance [8]. Its usefulness as...modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto distribution is amenable to mathematical
ERIC Educational Resources Information Center
Dillenbourg, Pierre
1996-01-01
Maintains that diagnosis, explanation, and tutoring, the functions of an interactive learning environment, are collaborative processes. Examines how human-computer interaction can be improved using a distributed cognition framework. Discusses situational and distributed knowledge theories and provides a model on how they can be used to redesign…
Functional models for colloid retention in porous media at the triple line.
Dathe, Annette; Zevi, Yuniati; Richards, Brian K; Gao, Bin; Parlange, J-Yves; Steenhuis, Tammo S
2014-01-01
Spectral confocal microscope visualizations of microsphere movement in unsaturated porous media showed that attachment at the Air Water Solid (AWS) interface was an important retention mechanism. These visualizations can aid in resolving the functional form of retention rates of colloids at the AWS interface. In this study, soil adsorption isotherm equations were adapted by replacing the chemical concentration in the water as the independent variable with the cumulative number of colloids passing by. In order of increasing number of fitted parameters, the functions tested were the Langmuir adsorption isotherm, the logistic distribution, and the Weibull distribution. The functions were fitted against colloid concentrations obtained from time series of images acquired with a spectral confocal microscope for three experiments in which either plain or carboxylated polystyrene latex microspheres were pulsed into a small flow chamber filled with cleaned quartz sand. Both moving and retained colloids were quantified over time. In fitting the models to the data, the agreement improved with increasing number of model parameters. The Weibull distribution gave overall the best fit. The logistic distribution did not fit the initial retention of microspheres well, but otherwise the fit was good. The Langmuir isotherm only fitted the longest time series well. The results can be explained as follows: initially, when colloids are first introduced, the rate of retention is low. Once colloids are at the AWS interface they act as anchor points for other colloids to attach, thereby increasing the retention rate as clusters form. Once the available attachment sites diminish, the retention rate decreases.
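The three-way model comparison can be sketched with synthetic data: generate an S-shaped retention curve, fit all three candidate functions by nonlinear least squares, and compare residual sums of squares. Function names, parameterizations, and data values here are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate retention functions, with cumulative colloids passing (c) as the
# independent variable and smax the retention capacity.
def langmuir(c, smax, k):
    return smax * k * c / (1 + k * c)

def logistic_cdf(c, smax, mu, s):
    return smax / (1 + np.exp(-(c - mu) / s))

def weibull_cdf(c, smax, lam, k):
    return smax * (1 - np.exp(-(c / lam) ** k))

rng = np.random.default_rng(5)
c = np.linspace(0.01, 10, 60)
# Synthetic "retained colloids" data with a slow initial uptake (Weibull-shaped).
obs = weibull_cdf(c, 1.0, 3.0, 2.0) + rng.normal(0, 0.01, c.size)

rss = {}
for name, f, p0 in [("langmuir", langmuir, [1.0, 1.0]),
                    ("logistic", logistic_cdf, [1.0, 3.0, 1.0]),
                    ("weibull", weibull_cdf, [1.0, 1.0, 1.0])]:
    popt, _ = curve_fit(f, c, obs, p0=p0, bounds=(1e-6, np.inf))
    rss[name] = float(np.sum((f(c, *popt) - obs) ** 2))
# As in the study, the more flexible Weibull form captures the S-shaped onset,
# while the concave Langmuir isotherm misses the slow initial retention.
```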
A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)
ERIC Educational Resources Information Center
Arenson, Ethan A.; Karabatsos, George
2017-01-01
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Correlation of hard X-ray and type 3 bursts in solar flares
NASA Technical Reports Server (NTRS)
Petrosian, V.; Leach, J.
1982-01-01
Correlations between X-ray and type 3 radio emission of solar bursts are described through a bivariate distribution function. Procedures for determining the form of this distribution are described. A model is constructed to explain the correlation between the X-ray spectral index and the ratio of X-ray to radio intensities. Implications of the model are discussed.
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
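The FHT idea underlying the failure time distribution can be sketched in the simplest case: for a drifted Wiener degradation path X(t) = μt + σB(t) (identity time transformation, no measurement error), the first hitting time of a threshold D follows the inverse Gaussian law. A simulation check with illustrative parameter values:

```python
import numpy as np
from math import erf, exp, sqrt

def ig_cdf(t, D, mu, sigma):
    """First-hitting-time CDF of X(t) = mu*t + sigma*B(t) at threshold D > 0:
    the inverse Gaussian distribution."""
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    st = sigma * sqrt(t)
    return Phi((mu * t - D) / st) + exp(2 * mu * D / sigma**2) * Phi(-(mu * t + D) / st)

rng = np.random.default_rng(11)
mu, sigma, D, dt, npath = 1.0, 0.5, 2.0, 0.005, 4000
steps = int(6.0 / dt)
# Simulate degradation increments and detect threshold crossings via the running max.
inc = mu * dt + sigma * sqrt(dt) * rng.standard_normal((npath, steps))
running_max = np.maximum.accumulate(np.cumsum(inc, axis=1), axis=1)
emp = (running_max[:, int(2.0 / dt) - 1] >= D).mean()  # empirical P(FHT <= 2)
# emp should track the closed-form value ig_cdf(2.0, D, mu, sigma), up to
# Monte Carlo noise and a small time-discretization bias.
```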
NASA Technical Reports Server (NTRS)
Mahan, J. R.; Tira, Nour E.
1991-01-01
An improved dynamic electrothermal model for the Earth Radiation Budget Experiment (ERBE) total, nonscanning channels is formulated. This model is then used to accurately simulate two types of dynamic solar observation: the solar calibration and the so-called pitchover maneuver. Using a second model, the nonscanner active cavity radiometer (ACR) thermal noise is studied. This study reveals that radiative emission and scattering by the surrounding parts of the nonscanner cavity are acceptably small. The dynamic electrothermal model is also used to compute ACR instrument transfer function. Accurate in-flight measurement of this transfer function is shown to depend on the energy distribution over the frequency spectrum of the radiation input function. A new array-type field of view limiter, whose geometry controls the input function, is proposed for in-flight calibration of an ACR and other types of radiometers. The point spread function (PSF) of the ERBE and the Clouds and Earth's Radiant Energy System (CERES) scanning radiometers is computed. The PSF is useful in characterizing the channel optics. It also has potential for recovering the distribution of the radiative flux from Earth by deconvolution.
Binder model system to be used for determination of prepolymer functionality
NASA Technical Reports Server (NTRS)
Martinelli, F. J.; Hodgkin, J. H.
1971-01-01
Development of a method for determining the functionality distribution of prepolymers used for rocket binders is discussed. Research has been concerned with accurately determining the gel point of a model polyester system containing a single trifunctional crosslinker, and the application of these methods to more complicated model systems containing a second trifunctional crosslinker, monofunctional ingredients, or a higher functionality crosslinker. Correlations of observed with theoretical gel points for these systems would allow the methods to be applied directly to prepolymers.
Multivariate η-μ fading distribution with arbitrary correlation model
NASA Astrophysics Data System (ADS)
Ghareeb, Ibrahim; Atiani, Amani
2018-03-01
An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, with novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables. This paper also provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating over slow, frequency-nonselective, arbitrarily correlated and not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined using the MGF-based approach. The effects of fading correlation between diversity branches, fading severity parameters and diversity level are studied.
Tarlochan, Faris; Mehboob, Hassan; Mehboob, Ali; Chang, Seung-Hwan
2018-06-01
Cementless hip prostheses with a porous outer coating are commonly used to repair proximally damaged femurs. It has been demonstrated that the stability of a prosthesis is also highly dependent on bone ingrowth into the porous texture. Bone ingrowth is influenced by the mechanical environment produced in the callus. In this study, bone ingrowth into the porous structure was predicted by using a mechano-regulatory model. Homogeneously distributed pores (200 and 800 μm in diameter) and functionally graded pores along the length of the prosthesis were introduced as the porous coating. Bone ingrowth was simulated using micromovements of 25 and 12 μm. Load control simulations were carried out instead of the traditionally used displacement control. Spatial and temporal distributions of tissues were predicted in all cases. The models with functionally graded pores of decreasing size gave the most homogeneous bone distribution and the highest bone ingrowth (98%), with the highest average Young's modulus over all tissue phenotypes, approximately 4.1 GPa. Besides this, the volume of the initial callus increased by 8.33% in the functionally graded pore models as compared to the 200 μm pore size models, which increased the bone volume. These findings indicate that a functionally graded porous surface promotes bone ingrowth efficiently, which can be considered in the design of the surface texture of hip prostheses.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippov, A. V., E-mail: fav@triniti.ru; Dyatko, N. A.; Kostenko, A. S.
2014-11-15
The charging of dust particles in weakly ionized inert gases at atmospheric pressure has been investigated. The conditions under which the gas is ionized by an external source, a beam of fast electrons, are considered. The electron energy distribution function in argon, krypton, and xenon has been calculated for three rates of gas ionization by fast electrons: 10^13, 10^14, and 10^15 cm^-1. A model of dust particle charging with allowance for the nonlocal formation of the electron energy distribution function in the region of strong plasma quasi-neutrality violation around the dust particle is described. The nonlocality is taken into account in an approximation where the distribution function depends only on the total electron energy. Comparative calculations of the dust particle charge with and without allowance for the nonlocality of the electron energy distribution function have been performed. Allowance for the nonlocality is shown to lead to a noticeable increase in the dust particle charge due to the influence of the group of hot electrons from the tail of the distribution function. It has been established that the screening constant virtually coincides with the smallest screening constant determined according to the asymptotic theory of screening with the electron transport and recombination coefficients of an unperturbed plasma.
Predicting species distributions from checklist data using site-occupancy models
Kery, M.; Gardner, B.; Monnerat, C.
2010-01-01
Aim: (1) To increase awareness of the challenges induced by imperfect detection, which is a fundamental issue in species distribution modelling; (2) to emphasize the value of replicate observations for species distribution modelling; and (3) to show how 'cheap' checklist data in faunal/floral databases may be used for the rigorous modelling of distributions by site-occupancy models. Location: Switzerland. Methods: We used checklist data collected by volunteers during 1999 and 2000 to analyse the distribution of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly in Switzerland. We used data from repeated visits to 1-ha pixels to derive 'detection histories' and apply site-occupancy models to estimate the 'true' species distribution, i.e. corrected for imperfect detection. We modelled blue hawker distribution as a function of elevation and year, and its detection probability as a function of elevation, year and season. Results: The best model contained cubic polynomial elevation effects for distribution and quadratic effects of elevation and season for detectability. We compared the site-occupancy model with a conventional distribution model based on a generalized linear model, which assumes perfect detectability (p = 1). The conventional distribution map looked very different from the distribution map obtained using site-occupancy models that accounted for imperfect detection. The conventional model underestimated the species distribution by 60%, and the slope parameters of the occurrence-elevation relationship were also underestimated when assuming p = 1. Elevation was not only an important predictor of blue hawker occurrence, but also of the detection probability, with a bell-shaped relationship. Furthermore, detectability increased over the season. The average detection probability was estimated at only 0.19 per survey.
Main conclusions: Conventional species distribution models do not model species distributions per se but rather the apparent distribution, i.e. an unknown proportion of species distributions. That unknown proportion is equivalent to detectability. Imperfect detection in conventional species distribution models yields underestimates of the extent of distributions and covariate effects that are biased towards zero. In addition, patterns in detectability will erroneously be ascribed to species distributions. In contrast, site-occupancy models applied to replicated detection/non-detection data offer a powerful framework for making inferences about species distributions corrected for imperfect detection. The use of 'cheap' checklist data greatly enhances the scope of applications of this useful class of models. © 2010 Blackwell Publishing Ltd.
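The core of a single-season site-occupancy model of the kind used here can be sketched in a few lines. The simulation below (hypothetical values for occupancy ψ and detection p, and a crude grid search in place of a proper optimizer) shows how replicated detection histories let occupancy and detection probability be separated:

```python
import numpy as np
from itertools import product

# Simulate replicated detection/non-detection data (hypothetical values).
rng = np.random.default_rng(1)
n_sites, n_visits = 500, 3
psi_true, p_true = 0.6, 0.19            # occupancy and per-visit detection
z = rng.random(n_sites) < psi_true       # latent true occupancy state
y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]

def neg_log_lik(params):
    """MacKenzie-style single-season occupancy likelihood (logit links)."""
    psi = 1 / (1 + np.exp(-params[0]))
    p = 1 / (1 + np.exp(-params[1]))
    d = y.sum(axis=1)
    # Sites with >=1 detection must be occupied; all-zero sites mix
    # 'occupied but missed' and 'truly absent'. Binomial coefficients are
    # omitted: they are constant in the parameters.
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(n_visits - d),
        psi * (1 - p)**n_visits + (1 - psi),
    )
    return -np.log(lik).sum()

# Crude grid search over logit space instead of scipy.optimize.
grid = np.linspace(-3, 3, 121)
best = min(product(grid, grid), key=lambda g: neg_log_lik(np.array(g)))
psi_hat = 1 / (1 + np.exp(-best[0]))
p_hat = 1 / (1 + np.exp(-best[1]))
print(f"psi ≈ {psi_hat:.2f}, p ≈ {p_hat:.2f}")
```

A naive model that ignores detection would estimate occupancy as the fraction of sites with at least one detection, which with p ≈ 0.19 and three visits is roughly half the true value, mirroring the 60% underestimation reported above.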
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality (with a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained that ensures the existence, uniqueness and global exponential stability of the equilibrium point of the model. Moreover, the exponential convergence rate is estimated; it depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Ghobakhloo, Marzieh; Zomorrodian, Mohammad Ebrahim; Javidan, Kurosh
2018-05-01
Propagation of dust-ion acoustic solitary waves (DIASWs) and double layers in the Earth's atmosphere is discussed using the Sagdeev potential method. The best model for the distribution function of electrons in the Earth's atmosphere is found by fitting available data with different distribution functions. The nonextensive function with parameter q = 0.58 provides the best fit to the observations. We therefore analyze the propagation of localized waves in an unmagnetized plasma containing nonextensive electrons, inertial ions, and negatively/positively charged stationary dust. It is found that both compressive and rarefactive solitons as well as double layers exist, depending on the sign (and the value) of the dust polarity. The characteristics of the propagating waves are described using the presented model.
Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.
2015-09-04
In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction-based pair distribution function (PDF) data from systems whose morphological features include distances on the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small-angle scattering (SAS) data. In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape function for any atomistic model.
Assal, Timothy J.; Anderson, Patrick J.; Sibold, Jason
2015-01-01
The availability of land cover data at local scales is an important component in forest management and monitoring efforts. Regional land cover data seldom provide detailed information needed to support local management needs. Here we present a transferable framework to model forest cover by major plant functional type using aerial photos, multi-date Système Pour l’Observation de la Terre (SPOT) imagery, and topographic variables. We developed probability of occurrence models for deciduous broad-leaved forest and needle-leaved evergreen forest using logistic regression in the southern portion of the Wyoming Basin Ecoregion. The model outputs were combined into a synthesis map depicting deciduous and coniferous forest cover type. We evaluated the models and synthesis map using a field-validated, independent data source. Results showed strong relationships between forest cover and model variables, and the synthesis map was accurate with an overall correct classification rate of 0.87 and Cohen’s kappa value of 0.81. The results suggest our method adequately captures the functional type, size, and distribution pattern of forest cover in a spatially heterogeneous landscape.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and the variability of environmental conditions, uncertainty affects their applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through a pair of companion papers. This paper (Part I) is dedicated to a formal mathematical proof. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. Closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork for probabilistic models of raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
Gravity dual for a model of perception
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Yu, E-mail: nakayama@berkeley.edu
2011-01-15
One of the salient features of human perception is its invariance under dilatation in addition to the Euclidean group, but its non-invariance under special conformal transformations. We investigate a holographic approach to the information processing in image discrimination with this feature. We claim that a strongly coupled analogue of the statistical model proposed by Bialek and Zee can be holographically realized in scale invariant but non-conformal Euclidean geometries. We identify the Bayesian probability distribution of our generalized Bialek-Zee model with the GKPW partition function of the dual gravitational system. We provide a concrete example of the geometric configuration based on a vector condensation model coupled with the Euclidean Einstein-Hilbert action. From the proposed geometry, we study sample correlation functions to compute the Bayesian probability distribution.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed in numerous physical systems. In such systems, statistical properties [such as the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that, despite similarities, these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function changes drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when t_a ≪ t and t_a ≫ t.
Pressure balance inconsistency exhibited in a statistical model of magnetospheric plasma
NASA Astrophysics Data System (ADS)
Garner, T. W.; Wolf, R. A.; Spiro, R. W.; Thomsen, M. F.; Korth, H.
2003-08-01
While quantitative theories of plasma flow from the magnetotail to the inner magnetosphere typically assume adiabatic convection, it has long been understood that these convection models tend to overestimate the plasma pressure in the inner magnetosphere. This phenomenon is called the pressure crisis or the pressure balance inconsistency. In order to analyze it in a new and more detailed manner we utilize an empirical model of the proton and electron distribution functions in the near-Earth plasma sheet (-50 RE < X < -10 RE), which uses the [1989] magnetic field model and a plasma sheet representation based upon several previously published statistical studies. We compare our results to a statistically derived particle distribution function at geosynchronous orbit. In this analysis the particle distribution function is characterized by the isotropic energy invariant λ = EV^(2/3), where E is the particle's kinetic energy and V is the magnetic flux tube volume. The energy invariant is conserved in guiding center drift under the assumption of strong, elastic pitch angle scattering. If, in addition, loss is negligible, the phase space density f(λ) is also conserved along the same path. The statistical model indicates that f(λ) is approximately independent of X for X ≤ -35 RE but decreases with increasing X for X ≥ -35 RE. The tailward gradient of f(λ) might be attributed to gradient/curvature drift for large isotropic energy invariants but not for small invariants. The tailward gradient of the distribution function indicates a violation of the adiabatic drift condition in the plasma sheet. It also confirms the existence of a "number crisis" in addition to the pressure crisis. In addition, plasma sheet pressure gradients, when crossed with the gradient of flux tube volume computed from the [1989] magnetic field model, indicate Region 1 currents on the dawn and dusk sides of the outer plasma sheet.
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
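The end result of the PGF construction is concrete enough to compute directly: lethal lesions are Poisson-distributed with mean αD + βD², so the surviving fraction is the zero-lesion probability, which is exactly the linear-quadratic equation. A minimal sketch with hypothetical α and β:

```python
import math

# Hypothetical radiobiological constants (units Gy^-1 and Gy^-2).
alpha, beta = 0.3, 0.03

def lesion_pmf(k, dose):
    """Poisson probability of k lethal lesions at the given dose."""
    lam = alpha * dose + beta * dose**2
    return math.exp(-lam) * lam**k / math.factorial(k)

def surviving_fraction(dose):
    """Survival = probability of zero lethal lesions: the LQ equation."""
    return lesion_pmf(0, dose)

for d in (1.0, 2.0, 5.0):
    print(f"D = {d} Gy: S = {surviving_fraction(d):.4f}")
```

The same `lesion_pmf` is what the abstract uses to predict multiplicities of chromosomal aberrations, with k > 0 giving the probability of one, two, or more lesions per cell.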
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
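The closure property claimed here, that a linear mixture of GMM-distributed endmembers is again a GMM, can be checked numerically. The sketch below (hypothetical 1D endmembers and abundances) compares the sampled mixed pixel's mean against the abundance-weighted GMM mean:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical endmembers, each a 1D two-component GMM:
# (component weights, component means, component variances).
em = [
    (np.array([0.5, 0.5]), np.array([0.2, 0.4]), np.array([1e-4, 1e-4])),
    (np.array([0.3, 0.7]), np.array([0.6, 0.9]), np.array([4e-4, 1e-4])),
]
a = np.array([0.35, 0.65])               # abundances, summing to 1

def sample_endmember(w, mu, var, n):
    """Draw n samples from a 1D Gaussian mixture."""
    comp = rng.choice(len(w), size=n, p=w)
    return rng.normal(mu[comp], np.sqrt(var[comp]))

n = 200_000
pixel = sum(a_j * sample_endmember(*em_j, n) for a_j, em_j in zip(a, em))

# The mixed pixel is itself a GMM whose component means are abundance-weighted
# sums over every combination of endmember components; its overall mean is the
# abundance-weighted sum of the endmember means.
mean_theory = sum(a_j * (w @ mu) for a_j, (w, mu, _) in zip(a, em))
print(pixel.mean(), mean_theory)
```

A full check would also compare component-wise covariances (which pick up a factor a_j² per endmember) and then run density estimation on `pixel`; the mean comparison is the simplest observable consequence.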
Water bag modeling of a multispecies plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, P.; Gravier, E.; Besse, N.
2011-03-15
We report in the present paper a new modeling method to study multiple-species dynamics in magnetized plasmas. The method is based on gyrowater bag modeling, which consists in using a multistep-like distribution function along the velocity direction parallel to the magnetic field. The choice of a water bag representation allows an elegant link between kinetic and fluid descriptions of a plasma. The gyrowater bag model has recently been adapted to the context of strongly magnetized plasmas. We present its extension to the case of multi-ion-species magnetized plasmas, each ion species being modeled via a multi-water-bag distribution function. The water bag modeling is discussed in detail, under the simplification of a cylindrical geometry that is convenient for linear plasma devices. As an illustration, results obtained in the linear framework for ion temperature gradient instabilities are presented, which are shown to agree qualitatively with earlier works.
Prokhorov, Alexander; Prokhorova, Nina I
2012-11-20
We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extension of a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles and directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We proposed the Monte Carlo algorithm for calculation of spectral effective emissivities for nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.
Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses
NASA Astrophysics Data System (ADS)
Rutkowska, Eva; Baker, Colin; Nahum, Alan
2010-04-01
A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs, and it may help investigators interpret the results from clinical studies.
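The architecture idea can be caricatured in a few lines. The sketch below (hypothetical radiosensitivity values, a 1D chain of FSUs, and a sliding-window critical functioning volume) shows a dose hot spot triggering complications for a small CFV (serial-like organ) but not for a large one (parallel-like):

```python
import numpy as np

rng = np.random.default_rng(4)

# LQ cell kill per FSU: an FSU survives if at least one stem cell survives.
alpha, beta, n0 = 0.15, 0.05, 100        # hypothetical radiosensitivity, cells/FSU

def fsu_survives(dose):
    p_cell = np.exp(-(alpha * dose + beta * dose**2))
    return rng.random() < 1 - (1 - p_cell) ** n0

def complication(dose_map, cfv_fraction, threshold=0.5, trials=200):
    """Organ = 1D chain of FSUs; complication if the FSU density in any
    sliding 'critical functioning volume' drops below the threshold."""
    n = dose_map.size
    w = max(1, int(cfv_fraction * n))    # CFV window size sets the architecture
    count = 0
    for _ in range(trials):
        alive = np.array([fsu_survives(d) for d in dose_map])
        density = np.convolve(alive, np.ones(w) / w, mode="valid")
        count += density.min() < threshold
    return count / trials

# A hot spot harms a 'serial' organ (small CFV) far more than a 'parallel' one.
dose = np.full(100, 2.0)
dose[45:55] = 50.0
serial = complication(dose, cfv_fraction=0.05)
parallel = complication(dose, cfv_fraction=0.8)
print("serial:", serial, "parallel:", parallel)
```

This mirrors the abstract's finding: for the serial-like setting the maximum dose (the hot spot) drives outcome, while the parallel-like organ tolerates a localized loss of FSUs.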
Measurement and modeling of diameter distributions of particulate matter in terrestrial solutions
NASA Astrophysics Data System (ADS)
Levia, Delphis F.; Michalzik, Beate; Bischoff, Sebastian; Näthe, Kerstin; Legates, David R.; Gruselle, Marie-Cecile; Richter, Susanne
2013-04-01
Particulate matter (PM) plays an important role in biogeosciences, affecting biosphere-atmosphere interactions and ecosystem health. This is the first known study to quantify and model PM diameter distributions of bulk precipitation, throughfall, stemflow, and organic layer (Oa) solution. Solutions were collected from a European beech (Fagus sylvatica L.) forest during leafed and leafless periods. Following scanning electron microscopy and image analysis, PM distributions were quantified and then modeled with the Box-Cox transformation. Based on an analysis of 43,278 individual particulates, median PM diameter of all solutions was around 3.0 µm. All PM diameter frequency distributions were skewed significantly to the right. Optimal power transformations of PM diameter distributions were between -1.00 and -1.56. The utility of this model reconstruction would be that large samples having a similar probability density function can be developed for similar forests. Further work on the shape and chemical composition of particulates is warranted.
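Fitting a Box-Cox power transformation amounts to maximizing a profile log-likelihood over the exponent λ. The sketch below applies this to stand-in right-skewed diameter data (a lognormal with median near 3 µm, for which the optimum is close to λ = 0, rather than the −1.00 to −1.56 range reported for the real solutions):

```python
import numpy as np

rng = np.random.default_rng(3)

def box_cox(x, lam):
    """Box-Cox power transform; the lam = 0 case is the log transform."""
    return np.log(x) if lam == 0 else (x**lam - 1) / lam

def neg_profile_loglik(x, lam):
    """Negative profile log-likelihood of lam (normality of transformed data)."""
    y = box_cox(x, lam)
    n = x.size
    return 0.5 * n * np.log(y.var()) - (lam - 1) * np.log(x).sum()

# Right-skewed stand-in for PM diameters (hypothetical lognormal, median ~3 um).
diam = rng.lognormal(mean=np.log(3.0), sigma=0.8, size=5000)

lams = np.linspace(-2.0, 1.0, 301)
best = min(lams, key=lambda l: neg_profile_loglik(diam, l))
print("optimal lambda ≈", best)
```

With the real, more heavily skewed PM data, the same procedure would push the optimum toward the negative exponents the study reports; the point of the sketch is only the fitting machinery.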
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to −1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0, 1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I.
2012-01-01
We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135° and 165° exhibit a sharply defined rainbow structure, the shape of which is determined mostly by single scattering properties of cloud particles, and therefore, can be modeled using the Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals. We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than it had been commonly understood. In fact, the Mie theory-derived polarized reflectance as a function of reduced scattering angle (in the rainbow angular range) and the (monodisperse) particle radius appears to be a proxy to a kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by the application of the corresponding inverse transform to the observed polarized rainbow. While the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, this procedure needs to be complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the droplet size distribution functional shape and is computationally fast (no look-up tables, no fitting, computations are the same as for the forward modeling).
Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum
NASA Astrophysics Data System (ADS)
Weitzel, Nils; Hense, Andreas; Ohlwein, Christian
2017-04-01
Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP).
These reconstructions combine proxy data syntheses with information from climate simulations for the LGM performed in the PMIP3 project. The proxy data syntheses consist either of raw pollen data or of normally distributed climate data from preprocessed proxy records. Future extensions of our method include the incorporation of other proxy types (transfer functions), the implementation of other spatial interpolation techniques, the use of age uncertainties, and the extension to spatio-temporal reconstructions of the last deglaciation. Our work is part of the PalMod project funded by the German Federal Ministry of Education and Science (BMBF).
An information hidden model holding cover distributions
NASA Astrophysics Data System (ADS)
Fu, Min; Cai, Chao; Dai, Zuxu
2018-03-01
The goal of steganography is to embed secret data into a cover so that no one apart from the sender and intended recipients can detect it. Usually, the way the cover is modified is determined by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion the cover suffers. This paper treats the cover carrying the secret message as a Markov chain, takes advantage of the deterministic relation between the initial distribution and the transition matrix of the chain, and uses the transition matrix as a constraint to decrease the statistical distortion the cover suffers in the process of information hiding. Furthermore, a hiding function is designed, and the transition matrix from the original cover to the stego cover is also presented. Experimental results show that the new model preserves consistent statistical characteristics between the original and stego covers.
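The constraint behind this idea, that embedding should preserve the cover's statistics, can be illustrated with a toy two-symbol cover chain: if the stego chain is forced to use the same transition matrix, it keeps the same stationary distribution. A minimal sketch (the matrix values are hypothetical):

```python
def stationary(P, iters=200):
    """Stationary distribution of a Markov transition matrix by power
    iteration: repeatedly apply pi <- pi P until it stops changing."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 2-symbol cover model; an embedding constrained to reuse this
# transition matrix leaves the long-run symbol statistics unchanged.
P = [[0.9, 0.1], [0.2, 0.8]]
pi = stationary(P)
print([round(p, 4) for p in pi])  # -> [0.6667, 0.3333]
```

Detailed balance for this chain gives pi_0 * 0.1 = pi_1 * 0.2, hence pi = (2/3, 1/3), matching the power iteration.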
Relaxation of ferroelectric states in 2D distributions of quantum dots: EELS simulation
NASA Astrophysics Data System (ADS)
Cortés, C. M.; Meza-Montes, L.; Moctezuma, R. E.; Carrillo, J. L.
2016-06-01
The relaxation time of collective electronic states in a 2D distribution of quantum dots is investigated theoretically by simulating EELS experiments. From the numerical calculation of the probability of energy loss of an electron beam traveling parallel to the distribution, it is possible to estimate the damping time of ferroelectric-like states. We generate this collective response of the distribution by introducing a mean field interaction among the quantum dots, and the model is then extended to incorporate effects of long-range correlations through a Bragg-Williams approximation. The behavior of the dielectric function, the energy loss function, and the relaxation time of ferroelectric-like states is then investigated as a function of the temperature of the distribution and the damping constant of the electronic states in the single quantum dots. The robustness of the trends and tendencies in our results indicates that this scheme of analysis can guide experimentalists in developing tailored quantum dot distributions for specific applications.
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), each considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.
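The correlative side of such a comparison can be sketched with a one-predictor logistic regression on occurrence data, fitted here by plain gradient descent. The temperatures, occurrences, and coefficients below are synthetic stand-ins, not T. dorbigni records:

```python
import math
import random

def fit_logistic(x, y, lr=0.5, epochs=2000):
    """One-predictor logistic regression fitted by batch gradient descent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def predict(b0, b1, xi):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))

# Synthetic occurrences: presence (1) more likely at warmer sites.
random.seed(0)
temps = [random.uniform(10, 30) for _ in range(200)]
pres = [1 if random.random() < 1 / (1 + math.exp(-(t - 20))) else 0 for t in temps]
# Standardize the predictor so gradient descent converges cleanly.
mu = sum(temps) / len(temps)
sd = (sum((t - mu) ** 2 for t in temps) / len(temps)) ** 0.5
z = [(t - mu) / sd for t in temps]
b0, b1 = fit_logistic(z, pres)
print(predict(b0, b1, (28 - mu) / sd) > predict(b0, b1, (12 - mu) / sd))
```

The fitted suitability surface is what gets thresholded into a realized-distribution map; a process-based map would instead come from incubation-tolerance limits.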
Magnetic intermittency of solar wind turbulence in the dissipation range
NASA Astrophysics Data System (ADS)
Pei, Zhongtian; He, Jiansen; Tu, Chuanyi; Marsch, Eckart; Wang, Linghua
2016-04-01
The features, nature, and fate of intermittency in the dissipation range are an interesting topic in solar wind turbulence. We calculate the distribution of flatness for the magnetic field fluctuations as a function of angle and scale. The flatness distribution shows a "butterfly" pattern, with two wings located at angles parallel/anti-parallel to the local mean magnetic field direction and the main body located at angles perpendicular to the local B0. This "butterfly" pattern illustrates that the flatness profile in the (anti-)parallel direction approaches its maximum value at larger scales and drops faster than that in the perpendicular direction. The contours of the probability distribution functions at different scales illustrate a "vase" pattern, clearer in the parallel direction, which confirms the scale variation of flatness and indicates intermittency generation and dissipation. The angular distribution of the structure function in the dissipation range shows an anisotropic pattern. The quasi-mono-fractal scaling of the structure function in the dissipation range is also illustrated and investigated with a mathematical model for inhomogeneous cascading (the extended p-model). Unlike in the inertial range, the extended p-model for the dissipation range results in an approximately uniform fragmentation measure. However, a more complete mathematical and physical model involving both non-uniform cascading and dissipation is needed. The nature of intermittency may be strong structures or large-amplitude fluctuations, which may be tested with magnetic helicity. In one case study, we find that the heating effect, in terms of entropy, for large-amplitude fluctuations seems to be more obvious than for strong structures.
NASA Astrophysics Data System (ADS)
Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin
2018-01-01
The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and winds from moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at normal incidence, and the results indicate that the peakedness coefficient is a nonlinear function of wind speed. Then, the performance of the modified Liu distribution, i.e., the Liu distribution with the obtained peakedness coefficient estimate, the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences, since it is the total or slick-filtered, not the radar-filtered, probability density that is included in that distribution. The best-performing distribution varies with both incidence angle and wind speed.
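At low incidence angles the GO model reduces to a quasi-specular formula whose angular shape is set by the slope variance of the chosen PDF. A minimal sketch of the Gaussian case, using the classical Cox-Munk total mean-square-slope fit; the Fresnel factor and coefficients are illustrative values, not those calibrated against KuPR:

```python
import math

def go_sigma0(theta_deg, wind_speed, fresnel_r2=0.61):
    """Quasi-specular GO backscatter with an isotropic Gaussian slope PDF:
    sigma0 = |R|^2 / (2 s^2 cos^4 th) * exp(-tan^2 th / (2 s^2)),
    with slope variance s^2 from the Cox-Munk total mean-square-slope fit."""
    s2 = 0.003 + 5.12e-3 * wind_speed          # Cox & Munk (1954) total mss
    th = math.radians(theta_deg)
    return (fresnel_r2 / (2 * s2 * math.cos(th) ** 4)) * math.exp(
        -math.tan(th) ** 2 / (2 * s2))

# At a fixed wind speed the quasi-specular return peaks at nadir and falls
# off monotonically over the 0-18 deg range considered in the study.
vals = [go_sigma0(th, 8.0) for th in (0, 6, 12, 18)]
print(all(a > b for a, b in zip(vals, vals[1:])))
```

Swapping the Gaussian exponential for a Gram-Charlier or Liu form changes only the slope-PDF factor, which is why the choice of PDF shows up directly in the predicted fall-off with incidence angle.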
Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe
2016-01-01
Functionally Graded Scaffolds (FGSs) are porous biomaterials in which porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge this gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of an FGS, a computational mechano-regulation model, and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions, and scaffold Young's modulus values. For each combination of these variables, we determined the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, that allows the highest amount of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, the pore dimensions were found to be almost constant throughout the entire scaffold, and using an FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs significantly increase bone formation compared to homogeneous porosity scaffolds.
Although experimental data is still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry of functionally graded scaffolds based on mechanobiological criteria. PMID:26771746
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns in India for the period 1981-2015 through a suitable seasonal autoregressive integrated moving average (SARIMA) model. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plots indicate that the maximum temperature series contains sharp peaks in almost all years, while this is not true of the minimum temperature series, so the two series are modelled separately. Candidate SARIMA models were chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for both the monthly average maximum and minimum temperature series on the basis of the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves over the histogram, and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years using the selected model.
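The seasonal part of such a model can be illustrated with the two steps that do most of the work: lag-12 differencing of the log series (the D=1 seasonal difference) and an autoregressive fit to what remains. This is a deliberately stripped-down sketch on synthetic data, with no seasonal MA term and no information-criterion search:

```python
import math
import random

def seasonal_difference(x, s=12):
    """The D=1 seasonal-differencing step of a SARIMA model."""
    return [x[i] - x[i - s] for i in range(s, len(x))]

def fit_ar1(y):
    """Least-squares AR(1) coefficient for the differenced series."""
    num = sum(y[i] * y[i - 1] for i in range(1, len(y)))
    den = sum(v * v for v in y[:-1])
    return num / den

# Synthetic monthly maxima: a 12-month cycle plus AR(1) noise (not real data).
random.seed(1)
series, e = [], 0.0
for i in range(240):
    e = 0.5 * e + random.gauss(0, 0.2)
    series.append(30 + 5 * math.sin(2 * math.pi * i / 12) + e)
logx = [math.log(v) for v in series]
d = seasonal_difference(logx)   # the seasonal cycle cancels exactly
phi = fit_ar1(d)
print(0 < phi < 1)  # a positive, stationary AR coefficient survives differencing
```

In practice one would fit the full SARIMA (1, 0, 0) × (0, 1, 1)12 specification by maximum likelihood (e.g. with a statistics package) rather than this two-step shortcut; the sketch only shows why the seasonal difference of the log series is the object the ACF/PACF diagnostics are computed on.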
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model that have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets composed of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distributions of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool.
Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
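The core object here, a beta-binomial over positive vote tallies, can be sketched with a simple method-of-moments fit (one of several ways to estimate the parameters; the ensemble size and tallies below are invented, and the original work also fits the error distribution):

```python
import math

def betabinom_pmf(k, n, a, b):
    """Beta-binomial probability of k positive votes out of n submodels."""
    logB = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.log(math.comb(n, k)) + logB(k + a, n - k + b) - logB(a, b))

def fit_moments(tallies, n):
    """Method-of-moments estimates of (alpha, beta) from observed tallies."""
    m = len(tallies)
    m1 = sum(tallies) / m                       # first sample moment
    m2 = sum(t * t for t in tallies) / m        # second sample moment
    denom = n * (m2 / m1 - m1 - 1) + m1
    a = (n * m1 - m2) / denom
    b = (n - m1) * (n - m2 / m1) / denom
    return a, b

n = 20
# Hypothetical tallies from a 20-network ensemble: strongly bimodal, i.e.
# most compounds get a near-unanimous vote one way or the other.
tallies = [2, 3, 18, 19, 20, 1, 0, 17, 2, 19, 3, 20, 18, 1, 0, 19]
a, b = fit_moments(tallies, n)
probs = [betabinom_pmf(k, n, a, b) for k in range(n + 1)]
print(a < 1 and b < 1)  # alpha, beta < 1 gives the U-shaped tally distribution
```

With the fitted tally distribution and a corresponding fit to the errors, the conditional error probability at each tally follows from Bayes' rule, which is what turns vote consensus into a per-prediction confidence estimate.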
Radiative transfer modeling applied to sea water constituent determination. [Gulf of Mexico]
NASA Technical Reports Server (NTRS)
Faller, K. H.
1979-01-01
Optical radiation from the sea is influenced by pigments dissolved in the water and contained in discrete organisms suspended in the sea, and by pigmented and unpigmented inorganic and organic particles. The problem of extracting information concerning these pigments and particulates from the optical properties of the sea is addressed, and the properties which determine the characteristics of the radiation that a remote sensor will detect and measure are considered. The results of applying the volume scattering function model to data collected in the Gulf of Mexico and its environs indicate that the size distribution and concentration of particles found in the sea can be predicted from measurements of the volume scattering function. Furthermore, with the volume scattering function model and knowledge of the absorption spectra of dissolved pigments, the radiative transfer model can compute a distribution of particle sizes, indices of refraction, and concentrations of dissolved pigments that gives an upwelling light spectrum closely matching measurements of that spectrum at sea.
Multistage degradation modeling for BLDC motor based on Wiener process
NASA Astrophysics Data System (ADS)
Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai
2018-05-01
Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, evolve nonlinearly and in multiple stages, so it is necessary to establish a nonlinear degradation model. In this research, our study was based on accelerated degradation data for the motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gaussian filter) was used to improve the estimates of the model parameters. Then, to maximize the likelihood function for parameter estimation, we used a numerical optimization method, the simplex method, in an iterative calculation. The modeling results show that the degradation mechanism changes during the degradation of a motor running at high speed. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions calculated at different times by the multistage model.
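The key ingredient, a Wiener-process drift that switches stage through a smooth transition function, can be sketched by simulating one degradation path. All parameter values here are hypothetical illustrations, not estimates from the motor data:

```python
import math
import random

def drift(t, mu1=0.02, mu2=0.10, t0=50.0, k=0.3):
    """Two-stage drift joined by a smooth logistic transition function:
    the drift moves from mu1 to mu2 as t passes the change point t0."""
    w = 1.0 / (1.0 + math.exp(-k * (t - t0)))
    return (1 - w) * mu1 + w * mu2

def simulate_path(T=100, dt=1.0, sigma=0.05, seed=2):
    """Euler simulation of the multistage Wiener degradation process."""
    random.seed(seed)
    x, path = 0.0, [0.0]
    for i in range(int(T / dt)):
        x += drift(i * dt) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        path.append(x)
    return path

path = simulate_path()
early = path[40] - path[20]   # increment over 20 steps in stage 1
late = path[90] - path[70]    # increment over 20 steps in stage 2
print(late > early)  # degradation accelerates after the transition
```

Crossing times of such paths over a failure threshold give the life distribution; fitting mu1, mu2, t0, k, and sigma by maximum likelihood to observed temperature paths is the estimation step the abstract describes.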
NASA Astrophysics Data System (ADS)
Crosby, N.; Georgoulis, M.; Vilmer, N.
1999-10-01
Solar burst observations in the deka-keV energy range originating from the WATCH experiment aboard the GRANAT spacecraft were used to build frequency distributions of measured X-ray flare parameters (Crosby et al., 1998). The results of the study show that: (1) the overall distribution functions are robust power laws extending over a number of decades, and the typical parameters of events (total counts, peak count rates, duration) are all correlated with each other; (2) the overall distribution functions are the convolution of significantly different distribution functions built on parts of the whole data set filtered by event duration; these "partial" frequency distributions are still power-law distributions over several decades, with a slope systematically decreasing with increasing duration; (3) no correlation is found between the elapsed time interval between successive bursts arising from the same active region and the peak intensity of the flare. In this paper, we attempt a tentative comparison between the statistical properties of the self-organized critical (SOC) cellular automaton statistical flare models (see e.g. Lu and Hamilton (1991), Georgoulis and Vlahos (1996, 1998)) and the respective properties of the WATCH flare data. Despite the inherent weaknesses of the SOC models in simulating a number of physical processes in the active region, it is found that most of the observed statistical properties can be reproduced using the SOC models, including the various frequency distributions and scatter plots. We finally conclude that, even if SOC models must be refined to improve the physical links to MHD approaches, they nevertheless represent a good approach to describing the properties of rapid energy dissipation and magnetic field annihilation in complex and magnetized plasmas.
References: Crosby N., Vilmer N., Lund N. and Sunyaev R., 1998, A&A, 334, 299-313; Crosby N., Lund N., Vilmer N. and Sunyaev R., 1998, A&A Supplement Series, 130, 233; Georgoulis M. and Vlahos L., 1996, Astrophys. J. Letters, 469, L135; Georgoulis M. and Vlahos L., 1998, in preparation; Lu E.T. and Hamilton R.J., 1991, Astrophys. J., 380, L89.
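The SOC cellular automata referred to above are variants of the Bak-Tang-Wiesenfeld sandpile: slow driving plus threshold toppling yields avalanche-size distributions with heavy, power-law-like tails. A minimal 2D sketch of the generic sandpile (not the Lu-Hamilton magnetic-field rules, and the grid size and thresholds are arbitrary):

```python
import random

def sandpile_avalanches(L=16, drops=15000, zc=4, seed=3):
    """BTW sandpile: add one grain at a time at a random site, topple any
    site reaching the threshold zc onto its neighbours, and record the
    avalanche size (total number of topplings) for each added grain."""
    random.seed(seed)
    grid = [[0] * L for _ in range(L)]
    sizes = []
    for _ in range(drops):
        i, j = random.randrange(L), random.randrange(L)
        grid[i][j] += 1
        size = 0
        stack = [(i, j)]
        while stack:
            a, b = stack.pop()
            if grid[a][b] < zc:
                continue
            grid[a][b] -= zc
            size += 1
            if grid[a][b] >= zc:          # may need to topple again
                stack.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:   # open boundaries dissipate
                    grid[na][nb] += 1
                    if grid[na][nb] >= zc:
                        stack.append((na, nb))
        sizes.append(size)
    return sizes

sizes = sandpile_avalanches()
nonzero = sorted(s for s in sizes if s > 0)
med, largest = nonzero[len(nonzero) // 2], nonzero[-1]
print(largest >= 10 * med)  # heavy-tailed avalanche-size distribution
```

Histogramming `nonzero` on log-log axes gives the robust power-law frequency distributions that the comparison with the WATCH flare statistics relies on.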
Constraining the noise-free distribution of halo spin parameters
NASA Astrophysics Data System (ADS)
Benson, Andrew J.
2017-11-01
Any measurement made using an N-body simulation is subject to noise due to the finite number of particles used to sample the dark matter distribution function, and the lack of structure below the simulation resolution. This noise can be particularly significant when attempting to measure intrinsically small quantities, such as halo spin. In this work, we develop a model to describe the effects of particle noise on halo spin parameters. This model is calibrated using N-body simulations in which the particle noise can be treated as a Poisson process on the underlying dark matter distribution function, and we demonstrate that this calibrated model reproduces measurements of halo spin parameter error distributions previously measured in N-body convergence studies. Utilizing this model, along with previous measurements of the distribution of halo spin parameters in N-body simulations, we place constraints on the noise-free distribution of halo spins. We find that the noise-free median spin is 3 per cent lower than that measured directly from the N-body simulation, corresponding to a shift of approximately 40 times the statistical uncertainty in this measurement arising purely from halo counting statistics. We also show that measurement of the spin of an individual halo to 10 per cent precision requires at least 4 × 10^4 particles in the halo - for haloes containing 200 particles, the fractional error on spins measured for individual haloes is of order unity. N-body simulations should be viewed as the results of a statistical experiment applied to a model of dark matter structure formation. When viewed in this way, it is clear that determination of any quantity from such a simulation should be made through forward modelling of the effects of particle noise.
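The forward-modelling viewpoint advocated above can be sketched with a toy noise model: draw "true" spins from a lognormal, add a particle-noise component whose amplitude shrinks as N^-1/2 and which adds in quadrature, and compare medians. The lognormal parameters and the noise constant are illustrative, not the paper's calibration:

```python
import math
import random

def forward_model(true_spins, n_particles, c=0.5):
    """Toy particle-noise model: a spurious random component adds in
    quadrature to the spin, with Poisson-like amplitude ~ 1/sqrt(N).
    The constant c is a hypothetical choice, not a calibrated value."""
    sigma = c / math.sqrt(n_particles)
    return [math.hypot(s, random.gauss(0.0, sigma)) for s in true_spins]

def median(xs):
    return sorted(xs)[len(xs) // 2]

random.seed(4)
true_spins = [math.exp(random.gauss(math.log(0.035), 0.5)) for _ in range(5000)]
poor = forward_model(true_spins, 200)      # ~200 particles per halo
rich = forward_model(true_spins, 40000)    # well-resolved haloes
print(median(poor) > median(rich))  # particle noise biases measured spins high
```

Because the noise adds in quadrature, the measured median sits above the noise-free one, and by more for poorly resolved haloes, which is the direction of the bias the paper constrains.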
Lévy flight with absorption: A model for diffusing diffusivity with long tails
NASA Astrophysics Data System (ADS)
Jain, Rohit; Sebastian, K. L.
2017-03-01
We consider diffusion of a particle in a rearranging environment, so that the diffusivity of the particle is a stochastic function of time. In our previous model of "diffusing diffusivity" [Jain and Sebastian, J. Phys. Chem. B 120, 3988 (2016), 10.1021/acs.jpcb.6b01527], it was shown that the mean square displacement of the particle remains Fickian, i.e., it grows linearly in time, even though the distribution of displacements is non-Gaussian.
NASA Astrophysics Data System (ADS)
Zheng, Z. M.; Wang, B.
2018-06-01
Conventional heat transfer fluids usually have low thermal conductivity, limiting their efficiency in many applications. Many experiments have shown that adding nanosize solid particles to conventional fluids can greatly enhance their thermal conductivity. To explain this anomalous phenomenon, many theoretical investigations have been conducted in recent years. Some of this research has indicated that the particle agglomeration effect that commonly occurs in nanofluids should play an important role in such enhancement of the thermal conductivity, while some have shown that the enhancement of the effective thermal conductivity might be accounted for by the structure of nanofluids, which can be described using the radial distribution function of particles. However, theoretical predictions from these studies are not in very good agreement with experimental results. This paper proposes a prediction model for the effective thermal conductivity of nanofluids, considering both the agglomeration effect and the radial distribution function of nanoparticles. The resulting theoretical predictions for several sets of nanofluids are highly consistent with experimental data.
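For context, a hedged baseline: the classical Maxwell (Maxwell-Garnett) mixture formula below describes the well-dispersed, non-aggregated limit that agglomeration and radial-distribution corrections such as the proposed model build on; the water/alumina numbers are textbook illustrative values:

```python
def maxwell_keff(k_f, k_p, phi):
    """Classical Maxwell effective thermal conductivity of a dilute
    suspension of spheres (fluid k_f, particles k_p, volume fraction phi)."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Textbook illustration: water (0.613 W/m K) with 1 vol% alumina (40 W/m K).
k = maxwell_keff(0.613, 40.0, 0.01)
print(round(k / 0.613, 3))  # ≈ 1.029, i.e. about a 3% enhancement
```

The experimentally observed enhancements often exceed this dilute-limit prediction, which is precisely the gap that aggregate conduction paths and particle-structure effects are invoked to close.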
Dynamic data driven bidirectional reflectance distribution function measurement system
NASA Astrophysics Data System (ADS)
Nauyoks, Stephen E.; Freda, Sam; Marciniak, Michael A.
2014-09-01
The bidirectional reflectance distribution function (BRDF) is a fitted distribution function that defines the scatter of light off a surface. The BRDF depends on the directions of both the incident and scattered light. Because of the vastness of the measurement space of all possible incident and reflected directions, the calculation of the BRDF is usually performed using a minimal amount of measured data. This may lead to poor fits and uncertainty in certain regions of incidence or reflection. A dynamic data driven application system (DDDAS) is a concept that uses an algorithm applied to the collected data to influence the collection space of future data acquisition. The authors propose a DDD-BRDF algorithm that fits BRDF data as they are being acquired and uses on-the-fly fittings of various BRDF models to adjust the potential measurement space. In doing so, the aim is to find the model that best fits a surface and the best global fit of the BRDF with a minimum amount of collection space.
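The driving idea, letting disagreement between candidate fits choose the next measurement, can be sketched in a few lines. The two lobe models and all their parameters below are placeholders for whatever BRDF models are being fit on the fly, not the authors' models:

```python
import math

def gaussian_lobe(theta, a=1.0, s=0.2):
    """Placeholder candidate model 1: Gaussian scatter lobe."""
    return a * math.exp(-(theta ** 2) / (2 * s * s))

def cosine_lobe(theta, a=1.0, n=50):
    """Placeholder candidate model 2: cosine-power scatter lobe."""
    return a * math.cos(theta) ** n

def next_angle(measured, grid):
    """DDDAS-style selection: measure next where the candidate models
    (notionally already fit to the data so far) disagree the most."""
    unmeasured = [t for t in grid if t not in measured]
    return max(unmeasured, key=lambda t: abs(gaussian_lobe(t) - cosine_lobe(t)))

grid = [i * math.pi / 60 for i in range(16)]   # scatter angles 0..45 deg
measured = {grid[0], grid[1]}
theta_next = next_angle(measured, grid)
print(0.15 < theta_next < 0.35)  # picks the flank where the lobes differ most
```

In a full system the candidate models would be refit after every acquisition, so the selected angle migrates as the data accumulate; this loop is what shrinks the required collection space.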
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to take into account effectively the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
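The structure of such a calculation can be sketched as a classical sharp-cutoff cross section averaged over an asymmetric-Gaussian barrier distribution. The barrier height, widths, and radius below are invented round numbers, not the values given by the paper's empirical formulas:

```python
import math

def asym_gauss(B, Bm, w_left, w_right):
    """Asymmetric Gaussian barrier distribution (unnormalized): different
    widths on either side of the mean barrier Bm."""
    w = w_left if B < Bm else w_right
    return math.exp(-((B - Bm) ** 2) / (2 * w * w))

def capture_xsec(E, Bm=85.0, w_left=2.0, w_right=4.0, R=11.0):
    """Classical sharp-cutoff cross section pi R^2 (1 - B/E) for E > B,
    averaged over the barrier distribution by simple quadrature in B."""
    Bs = [70.0 + 0.05 * i for i in range(601)]   # 70..100 MeV grid
    wts = [asym_gauss(B, Bm, w_left, w_right) for B in Bs]
    sig = sum(w * math.pi * R * R * max(0.0, 1.0 - B / E)
              for B, w in zip(Bs, wts))
    return sig / sum(wts)    # fm^2 (1 fm^2 = 10 mb)

xs = {E: capture_xsec(E) for E in (80, 85, 90, 95)}
print(xs[80] > 0 and xs[80] < xs[85] < xs[90] < xs[95])
```

The low-B tail of the distribution gives a small but nonzero cross section below the mean barrier, the sub-barrier enhancement that a single sharp barrier cannot produce.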
Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J
2011-06-01
In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long-distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at the voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support for the evidence previously presented in the literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
Modeling vibration response and damping of cables and cabled structures
NASA Astrophysics Data System (ADS)
Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.
2015-02-01
In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
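The U-shaped occupation-time statistics can be reproduced with a two-state CTRW whose sojourn times have a diverging mean. This sketch checks only the qualitative arcsine-like shape; the exponent, total time, and thresholds are arbitrary choices:

```python
import random

def occupation_fraction(alpha=0.5, T=1e6, seed=None):
    """Fraction of time a two-state CTRW spends in state A, with
    heavy-tailed sojourn times psi(t) ~ t^-(1+alpha), alpha < 1
    (Pareto waiting times, so the mean sojourn time diverges)."""
    rng = random.Random(seed)
    t, tA, state = 0.0, 0.0, 0
    while t < T:
        stay = min(rng.paretovariate(alpha), T - t)
        if state == 0:
            tA += stay
        t += stay
        state = 1 - state
    return tA / T

fracs = [occupation_fraction(seed=s) for s in range(500)]
extreme = sum(1 for f in fracs if f < 0.1 or f > 0.9)
middle = sum(1 for f in fracs if 0.4 < f < 0.6)
print(extreme > middle)  # U-shaped (arcsine-like) occupation-time statistics
```

For alpha = 1/2 the limiting law is the classical arcsine (beta(1/2, 1/2)) distribution, so single trajectories spend almost all their time in one state, the hallmark of weak ergodicity breaking described above.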
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Phenomenological model to fit complex permittivity data of water from radio to optical frequencies.
Shubitidze, Fridon; Osterberg, Ulf
2007-04-01
A general factorized form of the dielectric function together with a fractional model-based parameter estimation method is used to provide an accurate analytical formula for the complex refractive index in water for the frequency range 10^8-10^16 Hz. The analytical formula is derived using a combination of a microscopic frequency-dependent rational function for adjusting zeros and poles of the dielectric dispersion together with the macroscopic statistical Fermi-Dirac distribution to provide a description of both the real and imaginary parts of the complex permittivity for water. The Fermi-Dirac distribution allows us to model the dramatic reduction in the imaginary part of the permittivity in the visible window of the water spectrum.
A Kinetic Study of Microwave Start-up of Tokamak Plasmas
NASA Astrophysics Data System (ADS)
du Toit, E. J.; O'Brien, M. R.; Vann, R. G. L.
2017-07-01
A kinetic model for studying the time evolution of the distribution function for microwave startup is presented. The model for the distribution function is two dimensional in momentum space, but, for simplicity and rapid calculations, has no spatial dependence. Experiments on the Mega Amp Spherical Tokamak have shown that the plasma current is carried mainly by electrons with energies greater than 70 keV, and effects thought to be important in these experiments are included, i.e. particle sources, orbital losses, the loop voltage and microwave heating, with suitable volume averaging where necessary to give terms independent of spatial dimensions. The model predicts current carried by electrons with the same energies as inferred from the experiments, though the current drive efficiency is smaller.
Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.
1985-01-01
The infinite dimensional compensator for a large class of flexible structures modeled as distributed systems is discussed, as well as an approximation scheme for designing finite dimensional compensators to approximate the infinite dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas currently being built. While the present model has been simplified, it retains the salient features of rigid body modes and several distributed components with different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite dimensional compensators and show which modes the optimal compensator ignores.
NASA Astrophysics Data System (ADS)
Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft
2018-01-01
We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and the notion that consciousness is related to symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
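The stretched-exponential decay described above, S(b) = S0 exp(-(b·DDC)^α), can be fitted by linearizing twice with logarithms. A minimal sketch on noise-free synthetic data; the DDC and α values below are illustrative, not results from the study:

```python
import numpy as np

def fit_stretched_exponential(b, signal, s0):
    """Fit S(b) = s0 * exp(-(b*DDC)**alpha) via double log-linearization:
    log(-log(S/s0)) = alpha*log(b) + alpha*log(DDC). Assumes clean decay."""
    y = np.log(-np.log(signal / s0))
    alpha, intercept = np.polyfit(np.log(b), y, 1)   # slope, intercept
    ddc = np.exp(intercept / alpha)
    return ddc, alpha

b = np.linspace(500.0, 6500.0, 13)         # b-values in s/mm^2, as in the study
signal = np.exp(-(b * 1.0e-3) ** 0.8)      # synthetic: DDC = 1e-3 mm^2/s, alpha = 0.8
ddc, alpha = fit_stretched_exponential(b, signal, 1.0)
```

With real noisy data a nonlinear least-squares fit is preferable, but the log-linear form shows how α and the DDC enter the signal model.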
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions, derived from different clusters. When a regression model must be estimated for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here l_i indicates the class number of each independent variable and p indicates the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of the independent variables heuristically. Alternatively, a suggested validity criterion for fuzzy clustering is used to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values, after an optimal membership function suitable for the exponential distribution has been obtained.
NASA Astrophysics Data System (ADS)
Milani, Armin Ebrahimi; Haghifam, Mahmood Reza
2008-10-01
Reconfiguration is an operational process used for optimization with specific objectives by changing the status of switches in a distribution network. In this paper, each objective is normalized, with inspiration from fuzzy sets, to make the optimization more flexible, and the objectives are formulated as a single multi-objective function. A genetic algorithm is used to solve the suggested model, so nonlinear objective functions and constraints pose no difficulty. The effectiveness of the proposed method is demonstrated through examples.
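The fuzzy normalization described above maps each raw objective onto [0, 1] and then aggregates the memberships into one fitness value for the genetic algorithm. A minimal sketch, assuming linear membership functions and min-aggregation (the paper's exact membership shapes are not specified here); the loss and voltage-deviation numbers are hypothetical:

```python
def fuzzy_membership(value, worst, best):
    """Linear fuzzy membership: 1.0 at the best objective value,
    0.0 at the worst, clipped to [0, 1] outside that range."""
    if worst == best:
        return 1.0
    mu = (worst - value) / (worst - best)
    return max(0.0, min(1.0, mu))

def overall_satisfaction(objectives):
    """Aggregate per-objective memberships with the min operator,
    a common fuzzy way to combine objectives into one fitness."""
    return min(fuzzy_membership(v, w, b) for v, w, b in objectives)

# hypothetical: (current value, worst case, best case) per objective,
# e.g. network losses in kW and per-unit voltage deviation
score = overall_satisfaction([(95.0, 120.0, 80.0), (0.04, 0.10, 0.01)])
```

The GA then simply maximizes `overall_satisfaction` over switch configurations.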
A quantum anharmonic oscillator model for the stock market
NASA Astrophysics Data System (ADS)
Gao, Tingting; Chen, Yu
2017-02-01
A financially interpretable quantum model is proposed to study the probability distributions of the stock price return. The dynamics of a quantum particle is considered an analog of the motion of the stock price. The probability distributions of the price return can then be computed from the wave functions that evolve according to the Schrödinger equation. Instead of the harmonic oscillator used in previous studies, a quantum anharmonic oscillator is applied to model the stock in a liquid market. The leptokurtic distributions of the price return can be reproduced by our quantum model with the introduction of mixed states and multiple potentials. The trend-following dominant market, in which the price return follows a bimodal distribution, is discussed as a specific case of the illiquid market.
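The probability distribution in such a model comes from the ground-state wave function of an anharmonic Hamiltonian. A minimal finite-difference sketch, in dimensionless units, with a quartic term whose strength is an illustrative stand-in for the paper's fitted potential:

```python
import numpy as np

# Ground state of H = -1/2 d^2/dx^2 + 1/2 x^2 + lam * x^4 on a grid.
# The price return plays the role of the coordinate x (illustrative).
n, L, lam = 400, 8.0, 0.1
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
kinetic = (np.diag(np.full(n, 1.0 / dx**2))
           - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
           - np.diag(np.full(n - 1, 0.5 / dx**2), -1))
potential = np.diag(0.5 * x**2 + lam * x**4)
energies, states = np.linalg.eigh(kinetic + potential)
density = states[:, 0] ** 2 / dx   # normalized probability density of returns
```

For lam = 0 this recovers the Gaussian ground state of the harmonic oscillator; the quartic term narrows the density, and mixing excited states fattens its tails.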
A deterministic width function model
NASA Astrophysics Data System (ADS)
Puente, C. E.; Sivakumar, B.
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Surface Material Characterization from Non-resolved Multi-band Optical Observations
2012-09-01
If the material bi-directional reflectance distribution functions (BRDFs) are known, a forward model of the spectral signature of the entire body can be constructed by summing contributions from all reflecting surfaces. The satellite wire-frame and attitude models, together with the material BRDFs, feed this forward model. BRDFs for several spacecraft materials, such as solar array panels and milled aluminum, have been measured in laboratory environments and/or created as numerical BRDF models.
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
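A toy version of the randomly growing system described above can be simulated directly: each firm is a sum of many independent subunits, each of which grows by a random multiplicative factor. The subunit-size and growth-noise parameters below are illustrative choices, not the paper's calibrated values:

```python
import math, random

def firm_growth_rates(n_firms=5000, n_subunits=100, sigma=0.3, seed=1):
    """Log growth rates of total firm size when each of n_subunits
    independent subunits grows by a lognormal multiplicative factor."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_firms):
        sizes = [rng.lognormvariate(0.0, 1.0) for _ in range(n_subunits)]
        grown = [s * rng.lognormvariate(0.0, sigma) for s in sizes]
        rates.append(math.log(sum(grown) / sum(sizes)))
    return rates

rates = firm_growth_rates()
```

Because the largest subunits dominate the sum, the resulting growth-rate distribution is wider than the naive 1/sqrt(n_subunits) central-limit prediction, which is the qualitative mechanism behind the heavy tails analysed in the paper.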
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems; and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
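The two shape statistics quoted above can be computed from central moments. A minimal sketch, demonstrated on a synthetic heavy-tailed (lognormal) sample rather than the actual irradiance records:

```python
import random

def skewness_and_excess_kurtosis(samples):
    """Sample skewness m3/m2^1.5 and excess kurtosis m4/m2^2 - 3
    from the central moments of the sample."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

rng = random.Random(0)
heavy_tailed = [rng.lognormvariate(0.0, 1.0) for _ in range(20000)]
skew, excess_kurtosis = skewness_and_excess_kurtosis(heavy_tailed)
```

For a symmetric near-normal sample, as observed at about 10 m depth, both statistics would instead be close to zero.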
An airport community noise-impact assessment model
NASA Technical Reports Server (NTRS)
Deloach, R.
1980-01-01
A computer model was developed to assess the noise impact of an airport on the community which it serves. Assessments are made using the Fractional Impact Method, by which a single number describes the community aircraft noise environment in terms of exposed population and multiple event noise level. The model comprises three elements: a conventional noise footprint model, a site specific population distribution model, and a dose response transfer function. The footprint model provides the noise distribution for a given aircraft operating scenario. This is combined with the site specific population distribution obtained from a national census data base to yield the number of residents exposed to a given level of noise. The dose response relationship relates noise exposure levels to the percentage of individuals highly annoyed by those levels.
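The Fractional Impact Method described above multiplies each exposed population count by a dose-response fraction. A minimal sketch, assuming a Schultz-style cubic dose-response curve whose coefficients are illustrative (not the model's actual transfer function), and hypothetical census counts:

```python
def percent_highly_annoyed(ldn):
    """Illustrative cubic dose-response curve: percentage of residents
    highly annoyed at day-night average sound level Ldn (dB)."""
    return max(0.0, 0.8553 * ldn - 0.0401 * ldn ** 2 + 0.00047 * ldn ** 3)

def fractional_impact(exposure):
    """Single-number community impact: sum over noise bands of
    (exposed population) * (fraction highly annoyed)."""
    return sum(pop * percent_highly_annoyed(ldn) / 100.0
               for ldn, pop in exposure)

# hypothetical population counts per Ldn band from a census overlay
impact = fractional_impact([(75, 1200), (70, 5400), (65, 21000)])
```

The footprint and population models supply the (Ldn, population) pairs; the transfer function converts them into one impact number.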
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
General models for the distributions of electric field gradients in disordered solids
NASA Astrophysics Data System (ADS)
LeCaër, G.; Brand, R. A.
1998-11-01
Hyperfine studies of disordered materials often yield the distribution of the electric field gradient (EFG) or related quadrupole splitting (QS). The question of the structural information that may be extracted from such distributions has been considered for more than fifteen years. Experimentally most studies have been performed using Mössbauer spectroscopy, especially on ^57Fe. However, NMR, NQR, EPR and PAC methods have also received some attention. The EFG distribution for a random distribution of electric charges was for instance first investigated by Czjzek et al [1] and a general functional form was derived for the joint (bivariate) distribution of the principal EFG tensor component V_zz and the asymmetry parameter η. The importance of the Gauss distribution for such rotationally invariant structural models was thus evidenced. Extensions of that model which are based on degenerate multivariate Gauss distributions for the elements of the EFG tensor were proposed by Czjzek. The latter extensions have been used since that time, more particularly in Mössbauer spectroscopy, under the name `shell models'. The mathematical foundations of all the previous models are presented and critically discussed as they are evidenced by simple calculations in the case of the EFG tensor. The present article only focuses on those aspects of the EFG distribution in disordered solids which can be discussed without explicitly looking at particular physical mechanisms. We present studies of three different model systems. A reference model directly related to the first model of Czjzek, called the Gaussian isotropic model (GIM), is shown to be the limiting case for many different models with a large number of independent contributions to the EFG tensor and not restricted to a point-charge model. The extended validity of the marginal distribution of η in the GIM model is discussed. 
It is also shown that the second model, based on degenerate multivariate normal distributions for the EFG components, yields questionable results and has been used excessively in experimental studies. The latter models are further discussed in the light of new results. The problems raised by these extensions are due to the fact that the consequences of the statistical invariance by rotation of the EFG tensor have not been sufficiently taken into account. Further difficulties arise because the structural degrees of freedom of the disordered solid under consideration have been confused with the degrees of freedom of QS distributions. The relations which are derived and discussed are further illustrated by the case of the EFG tensor distribution created at the centre of a sphere by m charges randomly distributed on its surface. The third model, a simple extension of the GIM, considers the case of an EFG tensor which is the sum of a fixed part and of a random part with variable weights. The bivariate distribution P(V_zz, η) is calculated exactly in the most symmetric case and the effect of the random part is investigated as a function of its weight. The various models are more particularly discussed in connection with short-range order in disordered solids. An ambiguity problem which arises in the evaluation of bivariate distributions of centre lineshift (isomer shift) and quadrupole splitting from ^57Fe Mössbauer spectra is finally considered quantitatively.
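The Gaussian isotropic model discussed above can be sampled directly: draw a random symmetric traceless tensor with rotationally invariant Gaussian statistics and extract |Vzz| and the asymmetry parameter η from its eigenvalues. A minimal sketch; the sample size and unit variance are arbitrary choices:

```python
import numpy as np

def sample_gim(n_samples=5000, seed=0):
    """Sample (|Vzz|, eta) pairs from a rotationally invariant
    Gaussian ensemble of symmetric traceless 3x3 tensors (GIM sketch)."""
    rng = np.random.default_rng(seed)
    vzz = np.empty(n_samples)
    eta = np.empty(n_samples)
    for i in range(n_samples):
        g = rng.normal(size=(3, 3))
        t = 0.5 * (g + g.T)                    # symmetric part
        t -= (np.trace(t) / 3.0) * np.eye(3)   # remove the trace
        ev = np.linalg.eigvalsh(t)
        ev = ev[np.argsort(np.abs(ev))]        # |Vxx| <= |Vyy| <= |Vzz|
        vzz[i] = abs(ev[2])
        eta[i] = (ev[0] - ev[1]) / ev[2]       # asymmetry, in [0, 1]
    return vzz, eta

vzz, eta = sample_gim()
```

Histograms of these samples reproduce the qualitative shape of the Czjzek joint distribution, including the vanishing probability density at η = 0.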
Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass
NASA Astrophysics Data System (ADS)
Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard
2013-06-01
The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ ∝ |Vzz| and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D
2017-06-01
Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross validation showed that models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale at which hydrological data are available and could be implemented in a geographical information system. We show the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.
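The cross-validated performance above is reported as a correct classification rate. A minimal sketch of that metric, with hypothetical quadrat observations and model probabilities (a 0.5 presence threshold is assumed; the study's thresholding choice may differ):

```python
def correct_classification_rate(observed, predicted_prob, threshold=0.5):
    """Fraction of quadrats where predicted presence/absence of a
    functional group matches the observed presence/absence."""
    hits = sum((p >= threshold) == bool(o)
               for o, p in zip(observed, predicted_prob))
    return hits / len(observed)

observed = [1, 0, 1, 1, 0, 0, 1, 0]                     # hypothetical quadrats
predicted = [0.9, 0.2, 0.6, 0.4, 0.1, 0.7, 0.8, 0.3]    # model probabilities
rate = correct_classification_rate(observed, predicted)
```

In k-fold cross validation, this rate is computed on each held-out fold and summarized across folds, giving ranges such as the 0.68-0.95 reported above.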
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
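The rate model described above is an inhomogeneous Poisson process whose rate is a cubic B-spline expansion, with log-likelihood sum_i log λ(t_i) − ∫λ(t)dt over the arrival times. A minimal sketch; knot placement, coefficients, and arrival times are illustrative, and the penalties and randoms/scatter terms of the full method are omitted:

```python
import numpy as np

def bspline_basis(t, knots, degree=3):
    """All B-spline basis values at t via the Cox-de Boor recursion."""
    b = np.array([1.0 if knots[i] <= t < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for d in range(1, degree + 1):
        nb = np.zeros(len(knots) - d - 1)
        for i in range(len(nb)):
            left = ((t - knots[i]) / (knots[i + d] - knots[i])
                    if knots[i + d] > knots[i] else 0.0)
            right = ((knots[i + d + 1] - t) / (knots[i + d + 1] - knots[i + 1])
                     if knots[i + d + 1] > knots[i + 1] else 0.0)
            nb[i] = left * b[i] + right * b[i + 1]
        b = nb
    return b

def log_likelihood(arrivals, coeffs, knots, t_end, n_quad=200):
    """Inhomogeneous-Poisson log-likelihood with rate sum_j c_j B_j(t);
    the integral term is approximated by the midpoint rule."""
    rate = lambda t: max(float(coeffs @ bspline_basis(t, knots)), 1e-300)
    mids = (np.arange(n_quad) + 0.5) * (t_end / n_quad)
    integral = (t_end / n_quad) * sum(rate(t) for t in mids)
    return sum(np.log(rate(t)) for t in arrivals) - integral

knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]   # clamped cubic knot vector
coeffs = np.ones(7)                                  # constant unit rate
ll = log_likelihood([0.1, 0.2, 0.4], coeffs, knots, 1.0)
```

Maximizing this likelihood over the spline coefficients (subject to nonnegativity and smoothness penalties) yields the continuous-time rate estimate.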
Effects of the gap slope on the distribution of removal rate in Belt-MRF.
Wang, Dekang; Hu, Haixiang; Li, Longxiang; Bai, Yang; Luo, Xiao; Xue, Donglin; Zhang, Xuejun
2017-10-30
Belt magnetorheological finishing (Belt-MRF) is a promising tool for large-optics processing. However, before using a spot, its shape should be designed and controlled by the polishing gap. Previous research revealed a remarkably nonlinear relationship between the removal function and the normal pressure distribution. The pressure is nonlinearly related to the gap geometry, precluding prediction of the removal function given the polishing gap. Here, we used the concepts of gap slope and virtual ribbon to develop a model of removal profiles in Belt-MRF. Between the belt and the workpiece in the main polishing area, a gap which changes linearly along the flow direction was created using a flat-bottom magnet box. The pressure distribution and removal function were calculated. Simulations were consistent with experiments. Different removal functions, consistent with theoretical calculations, were obtained by adjusting the gap slope. This approach allows removal functions in Belt-MRF to be predicted.
Optimal Sensor Placement in Active Multistatic Sonar Networks
2014-06-01
As b → 0, the Fermi function approaches the cookie cutter model. (Fermi-Dirac statistics were discovered in 1926 by Enrico Fermi and Paul Dirac in research on electrons.) Thesis Co-Advisors: Emily M. Craparo, Craig W. Rasmussen; Second Reader: Mümtaz Karataş. Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Kainulainen, J.; Juvela, M.; Alves, J.
2007-06-01
The giant molecular clouds (GMCs) of external galaxies can be mapped with sub-arcsecond resolution using multiband observations in the near-infrared. However, the interpretation of the observed reddening and attenuation of light, and their transformation into physical quantities, is greatly hampered by the effects arising from the unknown geometry and the scattering of light by dust particles. We examine the relation between the observed near-infrared reddening and the column density of the dust clouds. In this paper we particularly assess the feasibility of deriving the mass function of GMCs from near-infrared color excess data. We perform Monte Carlo radiative transfer simulations with 3D models of stellar radiation and clumpy dust distributions. We include the scattered light in the models and calculate near-infrared color maps from the simulated data. The color maps are compared with the true line-of-sight density distributions of the models. We extract clumps from the color maps and compare the observed mass function to the true mass function. For the physical configuration chosen in this study, essentially a face-on geometry, the observed mass function is a non-trivial function of the true mass function with a large number of parameters affecting its exact form. The dynamical range of the observed mass function is confined to 10^3.5-10^5.5 M_⊙ regardless of the dynamical range of the true mass function. The color maps are more sensitive in detecting the high-mass end of the mass function, and on average the masses of clouds are underestimated by a factor of ~10 depending on the parameters describing the dust distribution. A significant fraction of clouds is expected to remain undetected at all masses. The simulations show that the cloud mass function derived from JHK color excess data using simple foreground screening geometry cannot be regarded as a one-to-one tracer of the underlying mass function.
Effect of particle stiffness on contact dynamics and rheology in a dense granular flow
NASA Astrophysics Data System (ADS)
Bharathraj, S.; Kumaran, V.
2018-01-01
Dense granular flows have been well described by the Bagnold rheology, even when the particles are in the multibody contact regime and the coordination number is greater than 1. This is surprising, because the Bagnold law should be applicable only in the instantaneous collision regime, where the time between collisions is much larger than the period of a collision. Here, the effect of particle stiffness on rheology is examined. It is found that there is a rheological threshold between a particle stiffness of 10^4-10^5 for the linear contact model and 10^5-10^6 for the Hertzian contact model above which Bagnold rheology (stress proportional to square of the strain rate) is valid and below which there is a power-law rheology, where all components of the stress and the granular temperature are proportional to a power of the strain rate that is less than 2. The system is in the multibody contact regime at the rheological threshold. However, the contact energy per particle is less than the kinetic energy per particle above the rheological threshold, and it becomes larger than the kinetic energy per particle below the rheological threshold. The distribution functions for the interparticle forces and contact energies are also analyzed. The distribution functions are invariant with height, but they do depend on the contact model. The contact energy distribution functions are well fitted by Gamma distributions. There is a transition in the shape of the distribution function as the particle stiffness is decreased from 10^7 to 10^6 for the linear model and 10^8 to 10^7 for the Hertzian model, when the contact number exceeds 1. Thus, the transition in the distribution function correlates to the contact regime threshold from the binary to multibody contact regime, and is clearly different from the rheological threshold. An order-disorder transition has recently been reported in dense granular flows. 
The Bagnold rheology applies for both the ordered and disordered states, even though the rheological constants differ by orders of magnitude. The effect of particle stiffness on the order-disorder transition is examined here. It is found that when the particle stiffness is above the rheological threshold, there is an order-disorder transition as the base roughness is increased. The order-disorder transition disappears after the crossover to the soft-particle regime when the particle stiffness is decreased below the rheological threshold, indicating that the transition is a hard-particle phenomenon.
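The rheological threshold above separates Bagnold scaling (stress proportional to the square of the strain rate) from a power-law rheology with exponent below 2; the exponent can be estimated as a log-log slope. A sketch with synthetic Bagnold-scaling data, where the prefactor and strain rates are illustrative:

```python
import math

def rheology_exponent(strain_rates, stresses):
    """Least-squares slope of log(stress) vs log(strain rate).
    Bagnold scaling gives 2; soft-particle power-law rheology gives < 2."""
    xs = [math.log(g) for g in strain_rates]
    ys = [math.log(s) for s in stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rates = [0.1, 0.2, 0.5, 1.0, 2.0]                    # illustrative strain rates
exponent = rheology_exponent(rates, [0.003 * g ** 2 for g in rates])
```

Applied to simulation data below the stiffness threshold, the same fit would return an exponent smaller than 2 for every stress component.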
Effect of particle stiffness on contact dynamics and rheology in a dense granular flow.
Bharathraj, S; Kumaran, V
2018-01-01
Dense granular flows have been well described by the Bagnold rheology, even when the particles are in the multibody contact regime and the coordination number is greater than 1. This is surprising, because the Bagnold law should be applicable only in the instantaneous collision regime, where the time between collisions is much larger than the period of a collision. Here, the effect of particle stiffness on rheology is examined. It is found that there is a rheological threshold between a particle stiffness of 10^{4}-10^{5} for the linear contact model and 10^{5}-10^{6} for the Hertzian contact model above which Bagnold rheology (stress proportional to square of the strain rate) is valid and below which there is a power-law rheology, where all components of the stress and the granular temperature are proportional to a power of the strain rate that is less then 2. The system is in the multibody contact regime at the rheological threshold. However, the contact energy per particle is less than the kinetic energy per particle above the rheological threshold, and it becomes larger than the kinetic energy per particle below the rheological threshold. The distribution functions for the interparticle forces and contact energies are also analyzed. The distribution functions are invariant with height, but they do depend on the contact model. The contact energy distribution functions are well fitted by Gamma distributions. There is a transition in the shape of the distribution function as the particle stiffness is decreased from 10^{7} to 10^{6} for the linear model and 10^{8} to 10^{7} for the Hertzian model, when the contact number exceeds 1. Thus, the transition in the distribution function correlates to the contact regime threshold from the binary to multibody contact regime, and is clearly different from the rheological threshold. An order-disorder transition has recently been reported in dense granular flows. 
The Bagnold rheology applies for both the ordered and disordered states, even though the rheological constants differ by orders of magnitude. The effect of particle stiffness on the order-disorder transition is examined here. It is found that when the particle stiffness is above the rheological threshold, there is an order-disorder transition as the base roughness is increased. The order-disorder transition disappears after the crossover to the soft-particle regime when the particle stiffness is decreased below the rheological threshold, indicating that the transition is a hard-particle phenomenon.
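The Gamma fits to the contact energy distributions reported above can be reproduced in outline with scipy's maximum-likelihood fitting; a minimal sketch, where the shape and scale values are illustrative and not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "contact energies": Gamma-distributed with shape k = 2.0 and
# scale theta = 1.5 (illustrative values, not the paper's results).
k_true, theta_true = 2.0, 1.5
energies = rng.gamma(k_true, theta_true, size=20000)

# Maximum-likelihood Gamma fit; floc=0 pins the location parameter at
# zero, since contact energies are non-negative.
k_hat, loc, theta_hat = stats.gamma.fit(energies, floc=0)
print(k_hat, theta_hat)
```

With enough samples the fitted shape and scale recover the generating values, which is the sense in which a Gamma form "fits well" an empirical energy distribution.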
Sakschewski, Boris; von Bloh, Werner; Boit, Alice; Rammig, Anja; Kattge, Jens; Poorter, Lourens; Peñuelas, Josep; Thonicke, Kirsten
2015-01-22
Functional diversity is critical for ecosystem dynamics, stability and productivity. However, dynamic global vegetation models (DGVMs), which are increasingly used to simulate ecosystem functions under global change, condense functional diversity into plant functional types (PFTs) with constant parameters. Here, we develop an individual- and trait-based version of the DGVM LPJmL (Lund-Potsdam-Jena managed Land), called LPJmL with flexible individual traits (LPJmL-FIT), which we apply to generate plant trait maps for the Amazon basin. LPJmL-FIT incorporates empirical ranges of five traits of tropical trees extracted from the TRY global plant trait database, namely specific leaf area (SLA), leaf longevity (LL), leaf nitrogen content per leaf area (Narea), the maximum carboxylation rate of Rubisco per leaf area (vcmax_area), and wood density (WD). To scale the individual growth performance of trees, the leaf traits are linked by trade-offs based on the leaf economics spectrum, whereas wood density is linked to tree mortality. No preselection of growth strategies takes place, because individuals with unique trait combinations are uniformly distributed at tree establishment. We validate the modeled trait distributions against empirical trait data and the modeled biomass against a remote sensing product along a climatic gradient. Including trait variability and trade-offs successfully predicts natural trait distributions and achieves a more realistic representation of functional diversity at the local to regional scale. As sites of high climatic variability, the fringes of the Amazon promote trait divergence and the coexistence of multiple tree growth strategies, while lower plant trait diversity is found in the species-rich center of the region, which has relatively low climatic variability.
LPJmL-FIT makes it possible to test hypotheses on the effects of functional biodiversity on ecosystem functioning and to apply the DGVM to current challenges in ecosystem management from local to global scales, such as deforestation and climate change effects. © 2015 John Wiley & Sons Ltd.
Sekulić, Vladislav; Skinner, Frances K
2017-01-01
Although biophysical details of inhibitory neurons are becoming known, it is challenging to map these details onto function. Oriens-lacunosum/moleculare (O-LM) cells are inhibitory cells in the hippocampus that gate information flow, firing while phase-locked to theta rhythms. We build on our existing computational model database of O-LM cells to link model with function. We place our models in high-conductance states and modulate inhibitory inputs at a wide range of frequencies. We find preferred spiking recruitment of models at high (4–9 Hz) or low (2–5 Hz) theta depending on, respectively, the presence or absence of h-channels on their dendrites. This also depends on slow delayed-rectifier potassium channels, and preferred theta ranges shift when h-channels are potentiated by cyclic AMP. Our results suggest that O-LM cells can be differentially recruited by frequency-modulated inputs depending on specific channel types and distributions. This work exposes a strategy for understanding how biophysical characteristics contribute to function. DOI: http://dx.doi.org/10.7554/eLife.22962.001 PMID:28318488
On the mass function of stars growing in a flocculent medium
NASA Astrophysics Data System (ADS)
Maschberger, Th.
2013-12-01
Stars form in regions of very inhomogeneous densities and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which will spread the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, ṁ ∝ m^α. The distribution functions evolve in time and develop a power-law tail attached to a lognormal body, as in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail ∝ m^{-α} at the high-mass side for α > 1 and at the low-mass side for α < 1. Large fluctuations require that their distribution is strictly positive, for example, lognormal. For positive fluctuations the mass distribution function develops the power-law tail always at the high-mass side, independent of whether α is larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent medium, the range of parameters for which non-linear stochastic growth could shape the stellar initial mass function, as well as the effects of a distribution of initial masses and growth times.
Directional pair distribution function for diffraction line profile analysis of atomistic models
Leonardi, Alberto; Leoni, Matteo; Scardi, Paolo
2013-01-01
The concept of the directional pair distribution function is proposed to describe line broadening effects in powder patterns calculated from atomistic models of nano-polycrystalline microstructures. The approach provides at the same time a description of the size effect for domains of any shape and a detailed explanation of the strain effect caused by the local atomic displacement. The latter is discussed in terms of different strain types, also accounting for strain field anisotropy and grain boundary effects. The results can in addition be directly read in terms of traditional line profile analysis, such as that based on the Warren–Averbach method. PMID:23396818
Adiabatic description of long range frequency sweeping
NASA Astrophysics Data System (ADS)
Nyqvist, R. M.; Lilley, M. K.; Breizman, B. N.
2012-09-01
A theoretical framework is developed to describe long range frequency sweeping events in the 1D electrostatic bump-on-tail model with fast particle sources and collisions. The model includes three collision operators (Krook, drag (dynamical friction) and velocity space diffusion), and allows for a general shape of the fast particle distribution function. The behaviour of phase space holes and clumps is analysed in the absence of diffusion, and the effect of particle trapping due to separatrix expansion is discussed. With a fast particle distribution function whose slope decays above the resonant phase velocity, hooked frequency sweeping is found for holes in the presence of drag collisions alone.
Time distribution of heavy rainfall events in south west of Iran
NASA Astrophysics Data System (ADS)
Ghassabi, Zahra; kamali, G. Ali; Meshkatee, Amir-Hussain; Hajam, Sohrab; Javaheri, Nasrolah
2016-07-01
Accurate knowledge of rainfall time distribution is a fundamental issue in many meteorological-hydrological studies, such as using surface runoff information in the design of hydraulic structures, flood control and risk management, and river engineering studies. Since the largest dams of Iran are in the south-west of the country (i.e. the South Zagros), this research investigates the temporal rainfall distribution based on an analytical-numerical method to increase the accuracy of hydrological studies in Iran. The United States Soil Conservation Service (SCS) estimated the temporal rainfall distribution in various forms. Hydrology studies usually utilize the same distribution functions in other areas of the world, including Iran, due to the lack of sufficient observation data. However, in this research we first used the Weather Research and Forecasting (WRF) model to simulate the selected storms over the south-west of Iran. Then, a three-parameter logistic function was fitted to the rainfall data in order to compute the temporal rainfall distribution. The domain of the WRF model is 30.5N-34N and 47.5E-52.5E, with a resolution of 0.08 degrees in latitude and longitude. We selected 35 heavy storms based on the observed rainfall data set to simulate with the WRF model. Storm events were scrutinized independently from each other and the best analytical three-parameter logistic function was fitted for each grid point. The results show that the value of the coefficient a of the logistic function, which indicates rainfall intensity, varies from a minimum of 0.14 to a maximum of 0.7. Furthermore, the values of the coefficient b of the logistic function, which indicates the rainfall delay of a grid point from the start time of rainfall, vary from 1.6 in the south-west and east to more than 8 in the north and central parts of the studied area.
In addition, rainfall intensities in the south-west of Iran are lower than those observed or proposed by the SCS for the US.
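A three-parameter logistic fit of this kind can be sketched with scipy's curve_fit. The functional form c/(1 + exp(-a(t - b))) and all parameter values below are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, a, b, c):
    # a: intensity (steepness), b: delay (midpoint time), c: total depth
    return c / (1.0 + np.exp(-a * (t - b)))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 24.0, 49)              # hours since storm start
a_true, b_true, c_true = 0.4, 8.0, 50.0     # illustrative parameter values
y = logistic3(t, a_true, b_true, c_true) + rng.normal(0.0, 0.5, t.size)

# Least-squares fit of the three-parameter logistic to the
# synthetic cumulative rainfall curve.
popt, _ = curve_fit(logistic3, t, y, p0=(0.1, 12.0, 40.0))
a_fit, b_fit, c_fit = popt
print(a_fit, b_fit, c_fit)
```

The same fit would be repeated per grid point, giving maps of the intensity coefficient a and the delay coefficient b.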
Age at onset in bipolar I affective disorder in the USA and Europe.
Bellivier, Frank; Etain, Bruno; Malafosse, Alain; Henry, Chantal; Kahn, Jean-Pierre; Elgrabli-Wajsbrot, Orly; Jamain, Stéphane; Azorin, Jean-Michel; Frank, Ellen; Scott, Jan; Grochocinski, Victoria; Kupfer, David J; Golmard, Jean-Louis; Leboyer, Marion
2014-07-01
To test for differences in reported age at onset (AAO) of bipolar I affective disorder in clinical samples drawn from Europe and the USA. Admixture analysis was used to identify the model best fitting the observed AAO distributions of two large samples of bipolar I patients from Europe and the USA (n = 3616 and n = 2275, respectively). Theoretical AAO functions were compared between the two samples. The model best fitting the observed distribution of AAO in both samples was a mixture of three Gaussian distributions. The theoretical AAO functions of bipolar I disorder differed significantly between the European and USA populations, with further analyses indicating that (i) the proportion of patients belonging to the early-onset subgroup was higher in the USA sample (63 vs. 25%) and (ii) the mean age at onset (±SD) in the early-onset subgroup was lower for the USA sample (14.5 ± 4.9 vs. 19 ± 2.7 years). The models best describing the reported AAO distributions of European and USA bipolar I patients were remarkably stable. The intermediate- and late-onset subgroups had similar characteristics in the two samples. However, the theoretical AAO function differed significantly between the USA and European samples due to the higher proportion of patients in the early-onset subgroup and the lower mean age at onset in the USA sample.
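An admixture analysis of this kind amounts to fitting a three-component Gaussian mixture to the AAO values. A minimal sketch with a plain EM iteration on synthetic ages; the subgroup means, spreads and proportions are illustrative, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic ages at onset from three Gaussian subgroups
# (illustrative parameters, not the study's).
aao = np.concatenate([
    rng.normal(17, 3, 600),   # early onset
    rng.normal(26, 5, 300),   # intermediate onset
    rng.normal(40, 8, 100),   # late onset
])

# Plain EM for a one-dimensional three-component Gaussian mixture.
w = np.full(3, 1.0 / 3.0)                 # mixing proportions
mu = np.array([15.0, 30.0, 45.0])         # initial means
sd = np.array([5.0, 5.0, 5.0])            # initial standard deviations
for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = w * np.exp(-0.5 * ((aao[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    n_k = resp.sum(axis=0)
    w = n_k / aao.size
    mu = (resp * aao[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((resp * (aao[:, None] - mu) ** 2).sum(axis=0) / n_k)
print(w, mu, sd)
```

The fitted proportions w and means mu correspond to the subgroup sizes and mean ages at onset compared between samples in the study.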
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in the case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
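Modeling directional data as a mixture of circular pdfs can be sketched with von Mises components; the lobe parameters below are illustrative, and the actual glyph rendering is omitted:

```python
import numpy as np
from scipy.stats import vonmises

# Two-lobe directional mixture: each lobe has a weight (strength),
# a mean direction mu, and a concentration kappa (inverse spread).
# Parameter values are illustrative.
lobes = [(0.6, np.deg2rad(45.0), 8.0), (0.4, np.deg2rad(200.0), 4.0)]

theta = np.linspace(-np.pi, np.pi, 3600, endpoint=False)
pdf = sum(w * vonmises.pdf(theta, kappa, loc=mu) for w, mu, kappa in lobes)

# A glyph lobe would be rendered from (direction, spread, strength);
# here we just locate the principal mode of the mixture.
peak_deg = np.rad2deg(theta[np.argmax(pdf)])
print(peak_deg)
```

Each (weight, mu, kappa) triple maps directly onto one lobular glyph: direction from mu, angular spread from 1/kappa, and strength from the weight.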
Incoherent vector mesons production in PbPb ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2017-03-01
The incoherent rapidity distributions of vector mesons are computed in the dipole model for PbPb ultraperipheral collisions at the CERN Large Hadron Collider (LHC). The IIM model fitted to newer data is employed in the dipole amplitude. The Boosted Gaussian and Gaus-LC wave functions for vector mesons are implemented in the calculations as well. Predictions for the J/ψ, ψ(2s), ρ and ϕ incoherent rapidity distributions are evaluated and compared with experimental data and other theoretical predictions in this paper. We obtain closer predictions of the incoherent rapidity distributions for J/ψ than previous calculations in the IIM model.
Invariance in the recurrence of large returns and the validation of models of price dynamics
NASA Astrophysics Data System (ADS)
Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey
2013-08-01
Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
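The excursion waiting-time statistics can be sketched nonparametrically: define excursions as returns whose magnitude exceeds an empirical quantile and difference their indices. The threshold and the synthetic heavy-tailed return series below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic heavy-tailed returns (Student-t, an illustrative stand-in
# for an empirical return series).
returns = 0.01 * rng.standard_t(df=4, size=10000)

# Nonparametric excursion definition: |return| above its empirical
# 95th percentile (the threshold choice is illustrative).
thresh = np.quantile(np.abs(returns), 0.95)
excursion_idx = np.flatnonzero(np.abs(returns) > thresh)

# Waiting times, in return intervals, between successive excursions.
waits = np.diff(excursion_idx)
print(waits.mean())
```

For independent returns the mean waiting time is simply the inverse exceedance probability (about 20 intervals here); deviations of the empirical waiting-time distribution from that baseline are what the invariance analysis exploits.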
Exclusive photoproduction of vector mesons in proton-lead ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-02-01
Rapidity distributions of vector mesons are computed in the dipole model for proton-lead ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is implemented in the calculations of cross sections in the photon-hadron interaction. The bCGC model and Boosted Gaussian wave functions are employed in the scattering amplitude. We obtain predictions of rapidity distributions of the J/ψ meson in proton-lead ultraperipheral collisions. The predictions give a good description of the experimental data of ALICE. The rapidity distributions of ϕ, ω and ψ(2s) mesons in proton-lead ultraperipheral collisions are also presented in this paper.
LIMEPY: Lowered Isothermal Model Explorer in PYthon
NASA Astrophysics Data System (ADS)
Gieles, Mark; Zocchi, Alice
2017-10-01
LIMEPY solves distribution-function (DF) based lowered isothermal models. It solves Poisson's equation based on input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models; it computes normalized DF values, density and velocity moments, and projected properties, and generates discrete samples.
A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering
NASA Astrophysics Data System (ADS)
Milder, A. L.; Froula, D. H.
2017-10-01
A.B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Zvosc2
NASA Astrophysics Data System (ADS)
Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.
2018-04-01
Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Owing to the page limitation, only the charge distribution generated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.
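The deconvolution procedure referred to above can be sketched in the frequency domain. The Gaussian system response and the regularized (Wiener-like) inversion below are illustrative assumptions, not the actual PEA transfer function:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 512
x = np.arange(n)
# Illustrative "true" charge profile: a narrower peak and a broader
# space charge peak (arbitrary units and positions).
charge = np.exp(-0.5 * ((x - 150) / 8.0) ** 2) \
       + 0.6 * np.exp(-0.5 * ((x - 350) / 12.0) ** 2)

# System impulse response modeled as a normalized Gaussian
# (an assumption standing in for the PEA system response).
h = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))

# Measured signal: circular convolution of charge with h, plus small noise.
signal = np.real(np.fft.ifft(np.fft.fft(charge) * H))
signal += rng.normal(0.0, 1e-3, n)

# Regularized deconvolution; eps limits noise amplification at
# frequencies where |H| is small.
eps = 1e-3
recovered = np.real(np.fft.ifft(np.fft.fft(signal) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))
print(np.argmax(recovered))
```

The regularization term is exactly what breaks down for true surface charge: its unbounded bandwidth lies partly outside the system passband, so the recovered profile is necessarily smoothed.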
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, J.; Xue, X.
A comprehensive 3D CFD model is developed for a bi-electrode supported cell (BSC) SOFC. The model includes complicated transport phenomena of mass/heat transfer, charge (electron and ion) migration, and electrochemical reaction. A distinctive feature of the modeling study is that functionally graded porous electrode properties are taken into account, including not only linear but also nonlinear porosity distributions. Extensive numerical analysis is performed to elucidate the effects of both porous microstructure distributions and operating conditions on cell performance. Results indicate that cell performance is strongly dependent on both operating conditions and the porous microstructure distributions of the electrodes. Using the proposed fuel/gas feeding design, a uniform hydrogen distribution within the porous anode is achieved; the oxygen distribution within the cathode is dependent on porous microstructure distributions as well as pressure loss conditions. Simulation results show that a fairly uniform temperature distribution can be obtained with the proposed fuel/gas feeding design. The modeling results can be employed to guide the experimental design of BSC tests and provide pre-experimental analysis and, as a result, circumvent the high cost associated with trial-and-error experimental design and setup.
Hyde, M W; Schmidt, J D; Havrilla, M J
2009-11-23
A polarimetric bidirectional reflectance distribution function (pBRDF), based on geometrical optics, is presented. The pBRDF incorporates a visibility (shadowing/masking) function and a Lambertian (diffuse) component, which distinguishes it from other geometrical optics pBRDFs in the literature. It is shown that these additions keep the pBRDF bounded (and thus a more realistic physical model) as the angle of incidence or observation approaches grazing, and make it better able to model the behavior of light scattered from rough, reflective surfaces. In this paper, the theoretical development of the pBRDF is shown and discussed. Simulation results for a rough, perfectly reflecting surface obtained using an exact electromagnetic solution and experimental Mueller matrix results for two rough metallic samples are presented to validate the pBRDF.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
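The dependence of the estimator's stochastic error on the sampling distribution can be illustrated with a simple importance-sampling analogue: estimating the normalization of an anharmonic Boltzmann weight with two different Gaussian (harmonic) sampling widths. The potential and the widths are illustrative assumptions, not the paper's model systems:

```python
import numpy as np

rng = np.random.default_rng(6)

def boltzmann(x):
    # Unnormalized weight e^{-V(x)} for an illustrative anharmonic potential.
    return np.exp(-(0.5 * x**2 + 0.1 * x**4))

def estimate_norm(sigma, n=200000):
    # Sample from a harmonic (Gaussian) distribution of width sigma and
    # reweight; the estimator's spread depends on how well sigma matches
    # the target.
    x = rng.normal(0.0, sigma, n)
    q = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    w = boltzmann(x) / q
    return w.mean(), w.std(ddof=1) / np.sqrt(n)

z_good, err_good = estimate_norm(sigma=1.0)   # well-matched sampling width
z_bad, err_bad = estimate_norm(sigma=4.0)     # poorly matched sampling width
print(z_good, err_good, z_bad, err_bad)
```

Both estimators are unbiased, but the mismatched width inflates the standard error, mirroring the paper's observation that the choice of harmonic sampling distribution dominates the stochastic error.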
NASA Astrophysics Data System (ADS)
Konca, A.
2013-12-01
A kinematic model for the Mw7.1 2011 Van Earthquake was obtained using regional, teleseismic and GPS data. One issue regarding regional data is that 1D Green's functions may not be appropriate due to complications in the upper mantle and crust that affect the Pnl waveforms. In order to resolve whether the 1D Green's functions are appropriate, an aftershock of the main event was also modeled, which was then used as a criterion in the selection of the regional stations. The GPS data by itself is not sufficient to obtain a slip model, but helps constrain the slip distribution. The slip distribution is up-dip and bilateral with more slip toward the west, where the maximum slip reaches 4 meters. The rupture velocity is about 1.5 km/s.
A Cellular Automata Model of Bone Formation
Van Scoy, Gabrielle K.; George, Estee L.; Asantewaa, Flora Opoku; Kerns, Lucy; Saunders, Marnie M.; Prieto-Langarica, Alicia
2017-01-01
Bone remodeling is an elegantly orchestrated process by which osteocytes, osteoblasts and osteoclasts function as a syncytium to maintain or modify bone. On the microscopic level, bone consists of cells that create, destroy and monitor the bone matrix. These cells interact in a coordinated manner to maintain a tightly regulated homeostasis. It is this regulation that is responsible for the observed increase in bone gain in the dominant arm of a tennis player and the observed increase in bone loss associated with spaceflight and osteoporosis. The manner in which these cells interact to bring about a change in bone quality and quantity has yet to be fully elucidated. But efforts to understand the multicellular complexity can ultimately lead to eradication of metabolic bone diseases such as osteoporosis and improved implant longevity. Experimentally validated mathematical models that simulate functional activity and offer eventual predictive capabilities offer tremendous potential in understanding multicellular bone remodeling. Here we undertake the initial challenge of developing a mathematical model of bone formation validated with in vitro data obtained from osteoblastic bone cells induced to mineralize and quantified at 26 days of culture. A cellular automata model was constructed to simulate the in vitro characterization. Permutation tests were performed to compare the distribution of the mineralization in the cultures and the distribution of the mineralization in the mathematical models. The results of the permutation test show that the distributions of mineralization from the characterization and from the mathematical model come from the same probability distribution, thereby validating the cellular automata model. PMID:28189632
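A permutation test of the kind used for validation can be sketched as follows; the mineralization values are synthetic placeholders drawn from a common distribution, so the test should not indicate a difference:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic mineralization measurements for the cultures and for the
# model output (placeholders, not the study's data).
culture = rng.normal(10.0, 2.0, 40)
model_out = rng.normal(10.0, 2.0, 40)

observed = abs(culture.mean() - model_out.mean())
pooled = np.concatenate([culture, model_out])

# Permutation test on the absolute difference of group means: shuffle
# group labels and count how often the shuffled difference is at least
# as large as the observed one.
n_perm = 5000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = abs(perm[:40].mean() - perm[40:].mean())
    count += diff >= observed
p_value = (count + 1) / (n_perm + 1)
print(p_value)
```

A large p-value means the two distributions are consistent with a common origin, which is the sense in which the permutation test "validates" the cellular automata model.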
Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model
NASA Astrophysics Data System (ADS)
Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John
2005-01-01
Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
Electron-beam-charged dielectrics: Internal charge distribution
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.
1981-01-01
Theoretical calculations of an electron transport model of the charging of dielectrics due to electron bombardment are compared to measurements of internal charge distributions. The emphasis is on the distribution in Teflon. The position of the charge centroid as a function of time is not monotonic: it first moves deeper into the material and then moves back toward the surface. In most time regimes of interest, the charge distribution is not unimodal, but instead has two peaks. The location of the centroid near saturation is a function of the incident current density. While the qualitative agreement between theory and experiment is reasonable, quantitative comparison shows discrepancies of as much as a factor of two.
Exact infinite-time statistics of the Loschmidt echo for a quantum quench.
Campos Venuti, Lorenzo; Jacobson, N Tobias; Santra, Siddhartha; Zanardi, Paolo
2011-07-01
The equilibration dynamics of a closed quantum system is encoded in the long-time distribution function of generic observables. In this Letter we consider the Loschmidt echo generalized to finite temperature, and show that we can obtain an exact expression for its long-time distribution for a closed system described by a quantum XY chain following a sudden quench. In the thermodynamic limit the logarithm of the Loschmidt echo becomes normally distributed, whereas for small quenches in the opposite, quasicritical regime, the distribution function acquires a universal double-peaked form indicating poor equilibration. These findings, obtained by a central limit theorem-type result, extend to completely general models in the small-quench regime.